00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3686 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.093 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.222 > git --version # 'git version 2.39.2' 00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.986 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.995 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.005 Checking out Revision 16485855f227725e8e9566ee24d00b82aaeff0db (FETCH_HEAD) 00:00:05.005 > git config core.sparsecheckout # timeout=10 00:00:05.017 > git read-tree -mu HEAD # timeout=10 00:00:05.032 > git checkout -f 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=5 00:00:05.051 Commit message: "ansible/inventory: fix WFP37 mac address" 00:00:05.052 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:05.142 [Pipeline] Start of Pipeline 00:00:05.167 [Pipeline] library 00:00:05.195 Loading library shm_lib@f2beeebdc9d1f1c6c4d4791bb9c4c36bbeef976c 00:00:07.515 Library shm_lib@f2beeebdc9d1f1c6c4d4791bb9c4c36bbeef976c is cached. Copying from home. 00:00:07.550 [Pipeline] node 00:00:07.617 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.619 [Pipeline] { 00:00:07.631 [Pipeline] catchError 00:00:07.633 [Pipeline] { 00:00:07.648 [Pipeline] wrap 00:00:07.659 [Pipeline] { 00:00:07.667 [Pipeline] stage 00:00:07.668 [Pipeline] { (Prologue) 00:00:07.889 [Pipeline] sh 00:00:08.163 + logger -p user.info -t JENKINS-CI 00:00:08.179 [Pipeline] echo 00:00:08.181 Node: GP11 00:00:08.188 [Pipeline] sh 00:00:08.478 [Pipeline] setCustomBuildProperty 00:00:08.491 [Pipeline] echo 00:00:08.492 Cleanup processes 00:00:08.497 [Pipeline] sh 00:00:08.770 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.770 752847 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.783 [Pipeline] sh 00:00:09.063 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.063 ++ grep -v 'sudo pgrep' 00:00:09.063 ++ awk '{print $1}' 00:00:09.063 + sudo kill -9 00:00:09.063 + true 00:00:09.080 [Pipeline] cleanWs 00:00:09.087 [WS-CLEANUP] Deleting project workspace... 00:00:09.087 [WS-CLEANUP] Deferred wipeout is used... 
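The Prologue above hunts down stale SPDK processes left under the workspace and kills them before the workspace wipe. A minimal bash sketch of that pgrep/kill idiom as the xtrace shows it (WORKSPACE is a stand-in for the job path; the trailing "|| true" mirrors the "+ true" in the trace, keeping the step green when nothing matches):

#!/usr/bin/env bash
# Sketch of the workspace cleanup traced above; WORKSPACE stands in
# for /var/jenkins/workspace/nvmf-tcp-phy-autotest.
WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest}
# pgrep -af prints "PID full-command-line"; drop the pgrep invocation
# itself, then keep only the PID column.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# kill -9 with an empty PID list exits non-zero, hence the "|| true"
# (the "+ true" in the trace, where no stale processes were found).
sudo kill -9 $pids || true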
00:00:09.092 [WS-CLEANUP] done 00:00:09.096 [Pipeline] setCustomBuildProperty 00:00:09.109 [Pipeline] sh 00:00:09.388 + sudo git config --global --replace-all safe.directory '*' 00:00:09.495 [Pipeline] httpRequest 00:00:09.519 [Pipeline] echo 00:00:09.520 Sorcerer 10.211.164.101 is alive 00:00:09.527 [Pipeline] httpRequest 00:00:09.533 HttpMethod: GET 00:00:09.533 URL: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:00:09.536 Sending request to url: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:00:09.542 Response Code: HTTP/1.1 200 OK 00:00:09.542 Success: Status code 200 is in the accepted range: 200,404 00:00:09.543 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:00:11.669 [Pipeline] sh 00:00:11.969 + tar --no-same-owner -xf jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:00:11.982 [Pipeline] httpRequest 00:00:11.996 [Pipeline] echo 00:00:11.998 Sorcerer 10.211.164.101 is alive 00:00:12.006 [Pipeline] httpRequest 00:00:12.010 HttpMethod: GET 00:00:12.010 URL: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:00:12.011 Sending request to url: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:00:12.024 Response Code: HTTP/1.1 200 OK 00:00:12.025 Success: Status code 200 is in the accepted range: 200,404 00:00:12.025 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:00:37.060 [Pipeline] sh 00:00:37.342 + tar --no-same-owner -xf spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:00:40.635 [Pipeline] sh 00:00:40.914 + git -C spdk log --oneline -n5 00:00:40.914 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:00:40.914 89648519b bdev/compress: Output the pm_path entry for bdev_get_bdevs() 00:00:40.914 a1a2e2b48 nvme/pcie: add debug print for number of SGL/PRP entries 00:00:40.914 8b5c4be8b nvme/fio_plugin: add support for the disable_pcie_sgl_merge option 00:00:40.914 e431ba2e4 nvme/pcie: add disable_pcie_sgl_merge option 00:00:40.930 [Pipeline] withCredentials 00:00:40.939 > git --version # timeout=10 00:00:40.950 > git --version # 'git version 2.39.2' 00:00:40.966 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:40.968 [Pipeline] { 00:00:40.978 [Pipeline] retry 00:00:40.980 [Pipeline] { 00:00:40.998 [Pipeline] sh 00:00:41.283 + git ls-remote http://dpdk.org/git/dpdk main 00:00:41.861 [Pipeline] } 00:00:41.883 [Pipeline] // retry 00:00:41.887 [Pipeline] } 00:00:41.911 [Pipeline] // withCredentials 00:00:41.923 [Pipeline] httpRequest 00:00:41.942 [Pipeline] echo 00:00:41.944 Sorcerer 10.211.164.101 is alive 00:00:41.954 [Pipeline] httpRequest 00:00:41.958 HttpMethod: GET 00:00:41.959 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:41.959 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:41.961 Response Code: HTTP/1.1 200 OK 00:00:41.961 Success: Status code 200 is in the accepted range: 200,404 00:00:41.962 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:44.014 [Pipeline] sh 00:00:44.295 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:45.680 [Pipeline] sh 00:00:45.962 + git -C dpdk log --oneline -n5 00:00:45.962 fa8d2f7f28 
version: 24.07-rc2 00:00:45.962 d4bc3c2e01 maintainers: update for cxgbe driver 00:00:45.962 2227c0ed9a maintainers: update for Microsoft drivers 00:00:45.962 8385370337 maintainers: update for Arm 00:00:45.962 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:00:45.977 [Pipeline] } 00:00:45.994 [Pipeline] // stage 00:00:46.000 [Pipeline] stage 00:00:46.001 [Pipeline] { (Prepare) 00:00:46.017 [Pipeline] writeFile 00:00:46.030 [Pipeline] sh 00:00:46.306 + logger -p user.info -t JENKINS-CI 00:00:46.359 [Pipeline] sh 00:00:46.640 + logger -p user.info -t JENKINS-CI 00:00:46.651 [Pipeline] sh 00:00:46.931 + cat autorun-spdk.conf 00:00:46.931 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.931 SPDK_TEST_NVMF=1 00:00:46.931 SPDK_TEST_NVME_CLI=1 00:00:46.931 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.931 SPDK_TEST_NVMF_NICS=e810 00:00:46.931 SPDK_TEST_VFIOUSER=1 00:00:46.931 SPDK_RUN_UBSAN=1 00:00:46.931 NET_TYPE=phy 00:00:46.931 SPDK_TEST_NATIVE_DPDK=main 00:00:46.931 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:46.939 RUN_NIGHTLY=1 00:00:46.945 [Pipeline] readFile 00:00:46.971 [Pipeline] withEnv 00:00:46.972 [Pipeline] { 00:00:46.984 [Pipeline] sh 00:00:47.263 + set -ex 00:00:47.263 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:47.263 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:47.263 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.263 ++ SPDK_TEST_NVMF=1 00:00:47.263 ++ SPDK_TEST_NVME_CLI=1 00:00:47.263 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.263 ++ SPDK_TEST_NVMF_NICS=e810 00:00:47.263 ++ SPDK_TEST_VFIOUSER=1 00:00:47.263 ++ SPDK_RUN_UBSAN=1 00:00:47.263 ++ NET_TYPE=phy 00:00:47.263 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:47.263 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:47.263 ++ RUN_NIGHTLY=1 00:00:47.263 + case $SPDK_TEST_NVMF_NICS in 00:00:47.263 + DRIVERS=ice 00:00:47.263 + [[ tcp == \r\d\m\a ]] 00:00:47.263 + [[ -n ice ]] 00:00:47.263 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:47.263 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:47.263 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:47.263 rmmod: ERROR: Module irdma is not currently loaded 00:00:47.263 rmmod: ERROR: Module i40iw is not currently loaded 00:00:47.263 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:47.263 + true 00:00:47.263 + for D in $DRIVERS 00:00:47.263 + sudo modprobe ice 00:00:47.263 + exit 0 00:00:47.272 [Pipeline] } 00:00:47.286 [Pipeline] // withEnv 00:00:47.290 [Pipeline] } 00:00:47.305 [Pipeline] // stage 00:00:47.313 [Pipeline] catchError 00:00:47.315 [Pipeline] { 00:00:47.328 [Pipeline] timeout 00:00:47.328 Timeout set to expire in 50 min 00:00:47.329 [Pipeline] { 00:00:47.340 [Pipeline] stage 00:00:47.341 [Pipeline] { (Tests) 00:00:47.352 [Pipeline] sh 00:00:47.628 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.628 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.628 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.628 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:47.628 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.628 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.628 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:47.628 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.628 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.628 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.628 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:47.628 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.628 + source /etc/os-release 00:00:47.628 ++ NAME='Fedora Linux' 00:00:47.628 ++ VERSION='38 (Cloud Edition)' 00:00:47.628 ++ ID=fedora 00:00:47.628 ++ VERSION_ID=38 00:00:47.628 ++ VERSION_CODENAME= 00:00:47.628 ++ PLATFORM_ID=platform:f38 00:00:47.628 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:47.628 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:47.628 ++ LOGO=fedora-logo-icon 00:00:47.628 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:47.629 ++ HOME_URL=https://fedoraproject.org/ 00:00:47.629 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:47.629 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:47.629 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:47.629 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:47.629 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:47.629 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:47.629 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:47.629 ++ SUPPORT_END=2024-05-14 00:00:47.629 ++ VARIANT='Cloud Edition' 00:00:47.629 ++ VARIANT_ID=cloud 00:00:47.629 + uname -a 00:00:47.629 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:47.629 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:48.562 Hugepages 00:00:48.562 node hugesize free / total 00:00:48.562 node0 1048576kB 0 / 0 00:00:48.562 node0 2048kB 0 / 0 00:00:48.562 node1 1048576kB 0 / 0 00:00:48.562 node1 2048kB 0 / 0 00:00:48.562 00:00:48.562 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:48.562 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:48.562 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:48.562 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:48.562 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:48.562 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:48.562 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:48.562 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:48.562 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:48.562 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:48.562 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:48.562 + rm -f /tmp/spdk-ld-path 00:00:48.562 + source autorun-spdk.conf 00:00:48.562 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.562 ++ SPDK_TEST_NVMF=1 00:00:48.562 ++ SPDK_TEST_NVME_CLI=1 00:00:48.562 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.562 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.562 ++ SPDK_TEST_VFIOUSER=1 00:00:48.562 ++ SPDK_RUN_UBSAN=1 00:00:48.562 ++ NET_TYPE=phy 00:00:48.562 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:48.562 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:48.562 ++ RUN_NIGHTLY=1 00:00:48.562 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.562 + [[ -n '' ]] 00:00:48.562 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.562 + for M in /var/spdk/build-*-manifest.txt 00:00:48.562 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.562 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.562 + for M in /var/spdk/build-*-manifest.txt 00:00:48.562 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.562 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.562 ++ uname 00:00:48.562 + [[ Linux == \L\i\n\u\x ]] 00:00:48.562 + sudo dmesg -T 00:00:48.820 + sudo dmesg --clear 00:00:48.820 + dmesg_pid=753556 00:00:48.820 + [[ Fedora Linux == FreeBSD ]] 00:00:48.820 + sudo dmesg -Tw 00:00:48.820 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.820 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.820 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.820 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.820 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.820 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.820 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.820 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:48.820 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.820 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.820 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.820 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.820 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.820 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.820 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.820 Test configuration: 00:00:48.820 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.820 SPDK_TEST_NVMF=1 00:00:48.820 SPDK_TEST_NVME_CLI=1 00:00:48.820 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.820 SPDK_TEST_NVMF_NICS=e810 00:00:48.820 SPDK_TEST_VFIOUSER=1 00:00:48.820 SPDK_RUN_UBSAN=1 00:00:48.820 NET_TYPE=phy 00:00:48.820 SPDK_TEST_NATIVE_DPDK=main 00:00:48.820 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:48.820 RUN_NIGHTLY=1 11:55:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:48.820 11:55:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.820 11:55:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.820 11:55:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.820 11:55:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.820 11:55:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.820 11:55:56 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.820 11:55:56 -- paths/export.sh@5 -- $ export PATH 00:00:48.821 11:55:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.821 11:55:56 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:48.821 11:55:56 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:48.821 11:55:56 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721642156.XXXXXX 00:00:48.821 11:55:56 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721642156.71NF8p 00:00:48.821 11:55:56 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:48.821 11:55:56 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:00:48.821 11:55:56 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:48.821 11:55:56 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:48.821 11:55:56 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.821 11:55:56 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.821 11:55:56 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:48.821 11:55:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:48.821 11:55:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.821 11:55:56 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:48.821 11:55:56 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:48.821 11:55:56 -- pm/common@17 -- $ local monitor 00:00:48.821 11:55:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.821 11:55:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.821 11:55:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.821 11:55:56 -- pm/common@21 -- $ date +%s 00:00:48.821 11:55:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.821 11:55:56 -- pm/common@21 -- $ date +%s 00:00:48.821 11:55:56 -- pm/common@25 -- $ sleep 1 00:00:48.821 11:55:56 -- pm/common@21 -- $ date +%s 00:00:48.821 11:55:56 -- pm/common@21 -- $ date +%s 00:00:48.821 11:55:56 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721642156 00:00:48.821 11:55:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721642156 00:00:48.821 11:55:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721642156 00:00:48.821 11:55:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721642156 00:00:48.821 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721642156_collect-vmstat.pm.log 00:00:48.821 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721642156_collect-cpu-load.pm.log 00:00:48.821 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721642156_collect-cpu-temp.pm.log 00:00:48.821 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721642156_collect-bmc-pm.bmc.pm.log 00:00:49.753 11:55:57 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:00:49.753 11:55:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.753 11:55:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.753 11:55:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.753 11:55:57 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.753 Mon Jul 22 09:55:57 AM UTC 2024 00:00:49.753 11:55:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.753 v24.09-pre-259-g8fb860b73 00:00:49.753 11:55:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.753 11:55:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.753 11:55:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.753 11:55:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:49.753 11:55:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:49.753 11:55:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.753 ************************************ 00:00:49.753 START TEST ubsan 00:00:49.753 ************************************ 00:00:49.753 11:55:57 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:49.753 using ubsan 00:00:49.753 00:00:49.753 real 0m0.000s 00:00:49.753 user 0m0.000s 00:00:49.753 sys 0m0.000s 00:00:49.753 11:55:57 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:49.753 11:55:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:49.753 ************************************ 00:00:49.753 END TEST ubsan 00:00:49.753 ************************************ 00:00:49.753 11:55:57 -- common/autotest_common.sh@1142 -- $ return 0 00:00:49.753 11:55:57 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:00:49.753 11:55:57 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:49.753 11:55:57 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:49.753 11:55:57 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:00:49.753 11:55:57 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:00:49.753 11:55:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.011 ************************************ 00:00:50.011 START TEST build_native_dpdk 00:00:50.011 ************************************ 00:00:50.011 11:55:57 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:50.011 11:55:57 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:50.012 fa8d2f7f28 version: 24.07-rc2 00:00:50.012 d4bc3c2e01 maintainers: update for cxgbe driver 00:00:50.012 2227c0ed9a maintainers: update for Microsoft drivers 00:00:50.012 8385370337 maintainers: update for Arm 00:00:50.012 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:50.012 
11:55:57 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:50.012 patching file config/rte_config.h 00:00:50.012 Hunk #1 succeeded at 70 (offset 11 lines). 00:00:50.012 11:55:57 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc2 24.07.0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 24.07.0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc2 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc2 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc2 =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^0x ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^[a-f0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:50.012 11:55:57 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:00:50.013 11:55:57 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:00:50.013 11:55:57 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:00:50.013 11:55:57 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:00:50.013 11:55:57 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:00:50.013 11:55:57 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:50.013 11:55:57 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:54.190 The Meson build system 00:00:54.190 Version: 1.3.1 00:00:54.190 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:54.190 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:54.190 Build type: native build 00:00:54.190 Program cat found: YES (/usr/bin/cat) 00:00:54.190 Project name: DPDK 00:00:54.190 Project version: 24.07.0-rc2 00:00:54.190 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:54.190 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:54.190 Host machine cpu family: x86_64 00:00:54.190 Host machine cpu: x86_64 00:00:54.190 Message: ## Building in Developer Mode ## 00:00:54.190 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:54.190 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:54.190 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:54.190 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:00:54.190 Program cat found: YES (/usr/bin/cat) 00:00:54.190 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:00:54.190 Compiler for C supports arguments -march=native: YES 00:00:54.190 Checking for size of "void *" : 8 00:00:54.190 Checking for size of "void *" : 8 (cached) 00:00:54.190 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:00:54.190 Library m found: YES 00:00:54.190 Library numa found: YES 00:00:54.190 Has header "numaif.h" : YES 00:00:54.190 Library fdt found: NO 00:00:54.190 Library execinfo found: NO 00:00:54.190 Has header "execinfo.h" : YES 00:00:54.190 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:54.190 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:54.190 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:54.190 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:54.190 Run-time dependency openssl found: YES 3.0.9 00:00:54.190 Run-time dependency libpcap found: YES 1.10.4 00:00:54.190 Has header "pcap.h" with dependency libpcap: YES 00:00:54.190 Compiler for C supports arguments -Wcast-qual: YES 00:00:54.190 Compiler for C supports arguments -Wdeprecated: YES 00:00:54.190 Compiler for C supports arguments -Wformat: YES 00:00:54.190 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:54.190 Compiler for C supports arguments -Wformat-security: NO 00:00:54.190 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:54.190 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:54.190 Compiler for C supports arguments -Wnested-externs: YES 00:00:54.190 Compiler for C supports arguments -Wold-style-definition: YES 00:00:54.190 Compiler for C supports arguments -Wpointer-arith: YES 00:00:54.190 Compiler for C supports arguments -Wsign-compare: YES 00:00:54.190 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:54.190 Compiler for C supports arguments -Wundef: YES 00:00:54.190 Compiler for C supports arguments -Wwrite-strings: YES 00:00:54.190 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:54.190 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:00:54.190 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:54.190 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:54.190 Program objdump found: YES (/usr/bin/objdump) 00:00:54.190 Compiler for C supports arguments -mavx512f: YES 00:00:54.190 Checking if "AVX512 checking" compiles: YES 00:00:54.191 Fetching value of define "__SSE4_2__" : 1 00:00:54.191 Fetching value of define "__AES__" : 1 00:00:54.191 Fetching value of define "__AVX__" : 1 00:00:54.191 Fetching value of define "__AVX2__" : (undefined) 00:00:54.191 Fetching value of define "__AVX512BW__" : (undefined) 00:00:54.191 Fetching value of define "__AVX512CD__" : (undefined) 00:00:54.191 Fetching value of define "__AVX512DQ__" : (undefined) 00:00:54.191 Fetching value of define "__AVX512F__" : (undefined) 00:00:54.191 Fetching value of define "__AVX512VL__" : (undefined) 00:00:54.191 Fetching value of define "__PCLMUL__" : 1 00:00:54.191 Fetching value of define "__RDRND__" : 1 00:00:54.191 Fetching value of define "__RDSEED__" : (undefined) 00:00:54.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:00:54.191 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:54.191 Message: lib/log: Defining dependency "log" 00:00:54.191 Message: lib/kvargs: Defining dependency "kvargs" 00:00:54.191 Message: lib/argparse: Defining dependency "argparse" 00:00:54.191 Message: lib/telemetry: Defining dependency "telemetry" 00:00:54.191 Checking for function 
"getentropy" : NO 00:00:54.191 Message: lib/eal: Defining dependency "eal" 00:00:54.191 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:00:54.191 Message: lib/ring: Defining dependency "ring" 00:00:54.191 Message: lib/rcu: Defining dependency "rcu" 00:00:54.191 Message: lib/mempool: Defining dependency "mempool" 00:00:54.191 Message: lib/mbuf: Defining dependency "mbuf" 00:00:54.191 Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:54.191 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:54.191 Compiler for C supports arguments -mpclmul: YES 00:00:54.191 Compiler for C supports arguments -maes: YES 00:00:54.191 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:54.191 Compiler for C supports arguments -mavx512bw: YES 00:00:54.191 Compiler for C supports arguments -mavx512dq: YES 00:00:54.191 Compiler for C supports arguments -mavx512vl: YES 00:00:54.191 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:54.191 Compiler for C supports arguments -mavx2: YES 00:00:54.191 Compiler for C supports arguments -mavx: YES 00:00:54.191 Message: lib/net: Defining dependency "net" 00:00:54.191 Message: lib/meter: Defining dependency "meter" 00:00:54.191 Message: lib/ethdev: Defining dependency "ethdev" 00:00:54.191 Message: lib/pci: Defining dependency "pci" 00:00:54.191 Message: lib/cmdline: Defining dependency "cmdline" 00:00:54.191 Message: lib/metrics: Defining dependency "metrics" 00:00:54.191 Message: lib/hash: Defining dependency "hash" 00:00:54.191 Message: lib/timer: Defining dependency "timer" 00:00:54.191 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:54.191 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:00:54.191 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:00:54.191 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:00:54.191 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:00:54.191 Message: lib/acl: Defining dependency "acl" 00:00:54.191 Message: lib/bbdev: Defining dependency "bbdev" 00:00:54.191 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:54.191 Run-time dependency libelf found: YES 0.190 00:00:54.191 Message: lib/bpf: Defining dependency "bpf" 00:00:54.191 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:54.191 Message: lib/compressdev: Defining dependency "compressdev" 00:00:54.191 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:54.191 Message: lib/distributor: Defining dependency "distributor" 00:00:54.191 Message: lib/dmadev: Defining dependency "dmadev" 00:00:54.191 Message: lib/efd: Defining dependency "efd" 00:00:54.191 Message: lib/eventdev: Defining dependency "eventdev" 00:00:54.191 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:54.191 Message: lib/gpudev: Defining dependency "gpudev" 00:00:54.191 Message: lib/gro: Defining dependency "gro" 00:00:54.191 Message: lib/gso: Defining dependency "gso" 00:00:54.191 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:54.191 Message: lib/jobstats: Defining dependency "jobstats" 00:00:54.191 Message: lib/latencystats: Defining dependency "latencystats" 00:00:54.191 Message: lib/lpm: Defining dependency "lpm" 00:00:54.191 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:54.191 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:54.191 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:54.191 Compiler for C supports arguments -mavx512f -mavx512dq 
-mavx512ifma: YES 00:00:54.191 Message: lib/member: Defining dependency "member" 00:00:54.191 Message: lib/pcapng: Defining dependency "pcapng" 00:00:54.191 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:54.191 Message: lib/power: Defining dependency "power" 00:00:54.191 Message: lib/rawdev: Defining dependency "rawdev" 00:00:54.191 Message: lib/regexdev: Defining dependency "regexdev" 00:00:54.191 Message: lib/mldev: Defining dependency "mldev" 00:00:54.191 Message: lib/rib: Defining dependency "rib" 00:00:54.191 Message: lib/reorder: Defining dependency "reorder" 00:00:54.191 Message: lib/sched: Defining dependency "sched" 00:00:54.191 Message: lib/security: Defining dependency "security" 00:00:54.191 Message: lib/stack: Defining dependency "stack" 00:00:54.191 Has header "linux/userfaultfd.h" : YES 00:00:54.191 Has header "linux/vduse.h" : YES 00:00:54.191 Message: lib/vhost: Defining dependency "vhost" 00:00:54.191 Message: lib/ipsec: Defining dependency "ipsec" 00:00:54.191 Message: lib/pdcp: Defining dependency "pdcp" 00:00:54.191 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:54.191 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:54.191 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:00:54.191 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:54.191 Message: lib/fib: Defining dependency "fib" 00:00:54.191 Message: lib/port: Defining dependency "port" 00:00:54.191 Message: lib/pdump: Defining dependency "pdump" 00:00:54.191 Message: lib/table: Defining dependency "table" 00:00:54.191 Message: lib/pipeline: Defining dependency "pipeline" 00:00:54.191 Message: lib/graph: Defining dependency "graph" 00:00:54.191 Message: lib/node: Defining dependency "node" 00:00:55.571 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:55.571 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:55.571 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:55.571 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:55.571 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:55.571 Compiler for C supports arguments -Wno-unused-value: YES 00:00:55.571 Compiler for C supports arguments -Wno-format: YES 00:00:55.571 Compiler for C supports arguments -Wno-format-security: YES 00:00:55.571 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:55.571 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:55.571 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:55.571 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:55.571 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:55.571 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:55.571 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:55.571 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:55.571 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:55.571 Has header "sys/epoll.h" : YES 00:00:55.571 Program doxygen found: YES (/usr/bin/doxygen) 00:00:55.571 Configuring doxy-api-html.conf using configuration 00:00:55.571 Configuring doxy-api-man.conf using configuration 00:00:55.571 Program mandb found: YES (/usr/bin/mandb) 00:00:55.571 Program sphinx-build found: NO 00:00:55.571 Configuring rte_build_config.h using configuration 00:00:55.571 Message: 00:00:55.571 ================= 00:00:55.571 Applications Enabled 00:00:55.571 ================= 00:00:55.571 
00:00:55.571 apps:
00:00:55.571 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:00:55.571 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:00:55.571 test-pmd, test-regex, test-sad, test-security-perf,
00:00:55.571 
00:00:55.571 Message: 
00:00:55.571 =================
00:00:55.571 Libraries Enabled
00:00:55.571 =================
00:00:55.571 
00:00:55.571 libs:
00:00:55.571 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:00:55.571 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:00:55.571 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:00:55.571 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:00:55.571 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:00:55.571 rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:00:55.571 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:00:55.571 graph, node,
00:00:55.571 
00:00:55.571 Message: 
00:00:55.571 ===============
00:00:55.571 Drivers Enabled
00:00:55.571 ===============
00:00:55.571 
00:00:55.571 common:
00:00:55.571 
00:00:55.572 bus:
00:00:55.572 pci, vdev,
00:00:55.572 mempool:
00:00:55.572 ring,
00:00:55.572 dma:
00:00:55.572 
00:00:55.572 net:
00:00:55.572 i40e,
00:00:55.572 raw:
00:00:55.572 
00:00:55.572 crypto:
00:00:55.572 
00:00:55.572 compress:
00:00:55.572 
00:00:55.572 regex:
00:00:55.572 
00:00:55.572 ml:
00:00:55.572 
00:00:55.572 vdpa:
00:00:55.572 
00:00:55.572 event:
00:00:55.572 
00:00:55.572 baseband:
00:00:55.572 
00:00:55.572 gpu:
00:00:55.572 
00:00:55.572 
00:00:55.572 Message: 
00:00:55.572 =================
00:00:55.572 Content Skipped
00:00:55.572 =================
00:00:55.572 
00:00:55.572 apps:
00:00:55.572 
00:00:55.572 libs:
00:00:55.572 
00:00:55.572 drivers:
00:00:55.572 common/cpt: not in enabled drivers build config
00:00:55.572 common/dpaax: not in enabled drivers build config
00:00:55.572 common/iavf: not in enabled drivers build config
00:00:55.572 common/idpf: not in enabled drivers build config
00:00:55.572 common/ionic: not in enabled drivers build config
00:00:55.572 common/mvep: not in enabled drivers build config
00:00:55.572 common/octeontx: not in enabled drivers build config
00:00:55.572 bus/auxiliary: not in enabled drivers build config
00:00:55.572 bus/cdx: not in enabled drivers build config
00:00:55.572 bus/dpaa: not in enabled drivers build config
00:00:55.572 bus/fslmc: not in enabled drivers build config
00:00:55.572 bus/ifpga: not in enabled drivers build config
00:00:55.572 bus/platform: not in enabled drivers build config
00:00:55.572 bus/uacce: not in enabled drivers build config
00:00:55.572 bus/vmbus: not in enabled drivers build config
00:00:55.572 common/cnxk: not in enabled drivers build config
00:00:55.572 common/mlx5: not in enabled drivers build config
00:00:55.572 common/nfp: not in enabled drivers build config
00:00:55.572 common/nitrox: not in enabled drivers build config
00:00:55.572 common/qat: not in enabled drivers build config
00:00:55.572 common/sfc_efx: not in enabled drivers build config
00:00:55.572 mempool/bucket: not in enabled drivers build config
00:00:55.572 mempool/cnxk: not in enabled drivers build config
00:00:55.572 mempool/dpaa: not in enabled drivers build config
00:00:55.572 mempool/dpaa2: not in enabled drivers build config
00:00:55.572 mempool/octeontx: not in enabled drivers build config
00:00:55.572 mempool/stack: not in enabled drivers build config
00:00:55.572 dma/cnxk: not in enabled drivers build config
00:00:55.572 dma/dpaa: not in enabled drivers build config
00:00:55.572 dma/dpaa2: not in enabled drivers build config
00:00:55.572 dma/hisilicon: not in enabled drivers build config
00:00:55.572 dma/idxd: not in enabled drivers build config
00:00:55.572 dma/ioat: not in enabled drivers build config
00:00:55.572 dma/odm: not in enabled drivers build config
00:00:55.572 dma/skeleton: not in enabled drivers build config
00:00:55.572 net/af_packet: not in enabled drivers build config
00:00:55.572 net/af_xdp: not in enabled drivers build config
00:00:55.572 net/ark: not in enabled drivers build config
00:00:55.572 net/atlantic: not in enabled drivers build config
00:00:55.572 net/avp: not in enabled drivers build config
00:00:55.572 net/axgbe: not in enabled drivers build config
00:00:55.572 net/bnx2x: not in enabled drivers build config
00:00:55.572 net/bnxt: not in enabled drivers build config
00:00:55.572 net/bonding: not in enabled drivers build config
00:00:55.572 net/cnxk: not in enabled drivers build config
00:00:55.572 net/cpfl: not in enabled drivers build config
00:00:55.572 net/cxgbe: not in enabled drivers build config
00:00:55.572 net/dpaa: not in enabled drivers build config
00:00:55.572 net/dpaa2: not in enabled drivers build config
00:00:55.572 net/e1000: not in enabled drivers build config
00:00:55.572 net/ena: not in enabled drivers build config
00:00:55.572 net/enetc: not in enabled drivers build config
00:00:55.572 net/enetfec: not in enabled drivers build config
00:00:55.572 net/enic: not in enabled drivers build config
00:00:55.572 net/failsafe: not in enabled drivers build config
00:00:55.572 net/fm10k: not in enabled drivers build config
00:00:55.572 net/gve: not in enabled drivers build config
00:00:55.572 net/hinic: not in enabled drivers build config
00:00:55.572 net/hns3: not in enabled drivers build config
00:00:55.572 net/iavf: not in enabled drivers build config
00:00:55.572 net/ice: not in enabled drivers build config
00:00:55.572 net/idpf: not in enabled drivers build config
00:00:55.572 net/igc: not in enabled drivers build config
00:00:55.572 net/ionic: not in enabled drivers build config
00:00:55.572 net/ipn3ke: not in enabled drivers build config
00:00:55.572 net/ixgbe: not in enabled drivers build config
00:00:55.572 net/mana: not in enabled drivers build config
00:00:55.572 net/memif: not in enabled drivers build config
00:00:55.572 net/mlx4: not in enabled drivers build config
00:00:55.572 net/mlx5: not in enabled drivers build config
00:00:55.572 net/mvneta: not in enabled drivers build config
00:00:55.572 net/mvpp2: not in enabled drivers build config
00:00:55.572 net/netvsc: not in enabled drivers build config
00:00:55.572 net/nfb: not in enabled drivers build config
00:00:55.572 net/nfp: not in enabled drivers build config
00:00:55.572 net/ngbe: not in enabled drivers build config
00:00:55.572 net/null: not in enabled drivers build config
00:00:55.572 net/octeontx: not in enabled drivers build config
00:00:55.572 net/octeon_ep: not in enabled drivers build config
00:00:55.572 net/pcap: not in enabled drivers build config
00:00:55.572 net/pfe: not in enabled drivers build config
00:00:55.572 net/qede: not in enabled drivers build config
00:00:55.572 net/ring: not in enabled drivers build config
00:00:55.572 net/sfc: not in enabled drivers build config
00:00:55.572 net/softnic: not in enabled drivers build config
00:00:55.572 net/tap: not in enabled drivers build config
00:00:55.572 net/thunderx: not in enabled drivers build config
00:00:55.572 net/txgbe: not in enabled drivers build config
00:00:55.572 net/vdev_netvsc: not in enabled drivers build config
00:00:55.572 net/vhost: not in enabled drivers build config
00:00:55.572 net/virtio: not in enabled drivers build config
00:00:55.572 net/vmxnet3: not in enabled drivers build config
00:00:55.572 raw/cnxk_bphy: not in enabled drivers build config
00:00:55.572 raw/cnxk_gpio: not in enabled drivers build config
00:00:55.572 raw/dpaa2_cmdif: not in enabled drivers build config
00:00:55.572 raw/ifpga: not in enabled drivers build config
00:00:55.572 raw/ntb: not in enabled drivers build config
00:00:55.572 raw/skeleton: not in enabled drivers build config
00:00:55.572 crypto/armv8: not in enabled drivers build config
00:00:55.572 crypto/bcmfs: not in enabled drivers build config
00:00:55.572 crypto/caam_jr: not in enabled drivers build config
00:00:55.572 crypto/ccp: not in enabled drivers build config
00:00:55.572 crypto/cnxk: not in enabled drivers build config
00:00:55.572 crypto/dpaa_sec: not in enabled drivers build config
00:00:55.572 crypto/dpaa2_sec: not in enabled drivers build config
00:00:55.572 crypto/ionic: not in enabled drivers build config
00:00:55.572 crypto/ipsec_mb: not in enabled drivers build config
00:00:55.572 crypto/mlx5: not in enabled drivers build config
00:00:55.572 crypto/mvsam: not in enabled drivers build config
00:00:55.572 crypto/nitrox: not in enabled drivers build config
00:00:55.572 crypto/null: not in enabled drivers build config
00:00:55.572 crypto/octeontx: not in enabled drivers build config
00:00:55.572 crypto/openssl: not in enabled drivers build config
00:00:55.572 crypto/scheduler: not in enabled drivers build config
00:00:55.572 crypto/uadk: not in enabled drivers build config
00:00:55.572 crypto/virtio: not in enabled drivers build config
00:00:55.572 compress/isal: not in enabled drivers build config
00:00:55.572 compress/mlx5: not in enabled drivers build config
00:00:55.572 compress/nitrox: not in enabled drivers build config
00:00:55.572 compress/octeontx: not in enabled drivers build config
00:00:55.572 compress/uadk: not in enabled drivers build config
00:00:55.572 compress/zlib: not in enabled drivers build config
00:00:55.572 regex/mlx5: not in enabled drivers build config
00:00:55.572 regex/cn9k: not in enabled drivers build config
00:00:55.572 ml/cnxk: not in enabled drivers build config
00:00:55.572 vdpa/ifc: not in enabled drivers build config
00:00:55.572 vdpa/mlx5: not in enabled drivers build config
00:00:55.572 vdpa/nfp: not in enabled drivers build config
00:00:55.572 vdpa/sfc: not in enabled drivers build config
00:00:55.572 event/cnxk: not in enabled drivers build config
00:00:55.572 event/dlb2: not in enabled drivers build config
00:00:55.572 event/dpaa: not in enabled drivers build config
00:00:55.572 event/dpaa2: not in enabled drivers build config
00:00:55.572 event/dsw: not in enabled drivers build config
00:00:55.572 event/opdl: not in enabled drivers build config
00:00:55.572 event/skeleton: not in enabled drivers build config
00:00:55.572 event/sw: not in enabled drivers build config
00:00:55.572 event/octeontx: not in enabled drivers build config
00:00:55.572 baseband/acc: not in enabled drivers build config
00:00:55.572 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:00:55.572 baseband/fpga_lte_fec: not in enabled drivers build config
00:00:55.572 baseband/la12xx: not in enabled drivers build config
00:00:55.572 baseband/null: not in enabled drivers build config
00:00:55.572 baseband/turbo_sw: not in enabled drivers build config
00:00:55.572 gpu/cuda: not in enabled drivers build config
00:00:55.572 
00:00:55.572 
00:00:55.572 Build targets in project: 224
00:00:55.572 
00:00:55.572 DPDK 24.07.0-rc2
00:00:55.572 
00:00:55.572 User defined options
00:00:55.572 libdir : lib
00:00:55.572 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:55.572 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:00:55.572 c_link_args : 
00:00:55.572 enable_docs : false
00:00:55.572 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:55.572 enable_kmods : false
00:00:55.572 machine : native
00:00:55.572 tests : false
00:00:55.572 
00:00:55.572 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:55.572 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:00:55.572 11:56:03 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:00:55.572 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:00:55.572 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:00:55.572 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:00:55.572 [3/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:00:55.572 [4/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:00:55.572 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:00:55.572 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:00:55.572 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:00:55.572 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:00:55.572 [9/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:00:55.572 [10/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:00:55.573 [11/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:00:55.573 [12/723] Linking static target lib/librte_kvargs.a
00:00:55.830 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:00:55.830 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:00:55.830 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o
00:00:55.830 [16/723] Linking static target lib/librte_log.a
00:00:56.090 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:00:56.090 [18/723] Linking static target lib/librte_argparse.a
00:00:56.090 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.351 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:00:56.614 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:00:56.614 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:00:56.614 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:00:56.614 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:00:56.614 [25/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:00:56.614 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:00:56.614 [27/723] Compiling C object
lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:56.614 [28/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:56.614 [29/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.614 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:56.614 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:56.614 [32/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:56.614 [33/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:56.614 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:56.614 [35/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:56.614 [36/723] Linking target lib/librte_log.so.24.2 00:00:56.614 [37/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:56.614 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:56.614 [39/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:56.614 [40/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:56.614 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:56.614 [42/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:56.614 [43/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:56.872 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:56.872 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:56.872 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:56.872 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:56.872 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:56.872 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:56.872 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:56.872 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:56.872 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:56.872 [53/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:56.872 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:56.872 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:56.872 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:56.872 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:56.872 [58/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:00:56.872 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:56.872 [60/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:56.872 [61/723] Linking target lib/librte_kvargs.so.24.2 00:00:56.872 [62/723] Linking target lib/librte_argparse.so.24.2 00:00:57.135 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:57.135 [64/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:57.135 [65/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:57.135 [66/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:00:57.135 [67/723] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:57.392 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:57.392 [69/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:57.392 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:57.392 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:57.392 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:57.654 [73/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:57.654 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:57.654 [75/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:57.654 [76/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:57.654 [77/723] Linking static target lib/librte_pci.a 00:00:57.654 [78/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:00:57.654 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:57.654 [80/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:57.918 [81/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:57.918 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:57.918 [83/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:57.918 [84/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:57.918 [85/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:57.918 [86/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:00:57.918 [87/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:57.918 [88/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:00:57.918 [89/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:57.918 [90/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:57.918 [91/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:57.918 [92/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.918 [93/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:57.918 [94/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:57.918 [95/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:57.918 [96/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:57.918 [97/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:57.918 [98/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:57.918 [99/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:57.918 [100/723] Linking static target lib/librte_ring.a 00:00:57.918 [101/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:58.185 [102/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:58.185 [103/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:58.185 [104/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:58.185 [105/723] Linking static target lib/librte_meter.a 00:00:58.185 [106/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:58.185 [107/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:58.185 [108/723] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:58.185 [109/723] Linking static target lib/librte_telemetry.a 00:00:58.185 [110/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:58.185 [111/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:58.185 [112/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:58.185 [113/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:58.185 [114/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:58.185 [115/723] Linking static target lib/librte_net.a 00:00:58.443 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:58.443 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:58.443 [118/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.443 [119/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.443 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:58.443 [121/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:58.443 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:58.443 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:58.702 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:58.702 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:00:58.702 [126/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.702 [127/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:58.702 [128/723] Linking static target lib/librte_mempool.a 00:00:58.702 [129/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:58.964 [130/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.964 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:58.964 [132/723] Linking target lib/librte_telemetry.so.24.2 00:00:58.964 [133/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:58.964 [134/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:58.964 [135/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:58.964 [136/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:58.964 [137/723] Linking static target lib/librte_cmdline.a 00:00:58.964 [138/723] Linking static target lib/librte_eal.a 00:00:58.964 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:00:58.964 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:00:59.227 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:59.227 [142/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:00:59.227 [143/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:59.227 [144/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:59.227 [145/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:59.227 [146/723] Linking static target lib/librte_cfgfile.a 00:00:59.227 [147/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:59.227 [148/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:59.227 [149/723] Compiling C 
object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:59.227 [150/723] Linking static target lib/librte_metrics.a 00:00:59.227 [151/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:59.484 [152/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:59.484 [153/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:59.484 [154/723] Linking static target lib/librte_rcu.a 00:00:59.484 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:59.484 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:59.484 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:59.484 [158/723] Linking static target lib/librte_bitratestats.a 00:00:59.748 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:59.748 [160/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:59.748 [161/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:59.748 [162/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:59.748 [163/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.748 [164/723] Linking static target lib/librte_mbuf.a 00:00:59.748 [165/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:59.748 [166/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.748 [167/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:59.748 [168/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:59.748 [169/723] Linking static target lib/librte_timer.a 00:00:59.748 [170/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.006 [171/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.006 [172/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:00.006 [173/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.006 [174/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:00.006 [175/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:00.006 [176/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:00.270 [177/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:00.271 [178/723] Linking static target lib/librte_bbdev.a 00:01:00.271 [179/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:00.271 [180/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:00.271 [181/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.271 [182/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:00.271 [183/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:00.271 [184/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:00.271 [185/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:00.271 [186/723] Linking static target lib/librte_compressdev.a 00:01:00.271 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:00.271 [188/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.532 [189/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 
00:01:00.532 [190/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:00.533 [191/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:00.533 [192/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:00.533 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.103 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:01.103 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:01.103 [196/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:01.103 [197/723] Linking static target lib/librte_distributor.a 00:01:01.103 [198/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:01.103 [199/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.103 [200/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:01.103 [201/723] Linking static target lib/librte_dmadev.a 00:01:01.103 [202/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.360 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:01.360 [204/723] Linking static target lib/librte_bpf.a 00:01:01.360 [205/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:01.360 [206/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:01.360 [207/723] Linking static target lib/librte_dispatcher.a 00:01:01.360 [208/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:01.360 [209/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:01.360 [210/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:01.360 [211/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:01.360 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:01.360 [213/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:01.620 [214/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:01.620 [215/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:01.620 [216/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.620 [217/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:01.620 [218/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:01.620 [219/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:01.620 [220/723] Linking static target lib/librte_gpudev.a 00:01:01.620 [221/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:01.620 [222/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:01.621 [223/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:01.621 [224/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:01.621 [225/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:01.621 [226/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:01.621 [227/723] Linking static target lib/librte_gro.a 00:01:01.621 [228/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:01.621 [229/723] Linking static target lib/librte_jobstats.a 00:01:01.621 [230/723] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:01.882 [231/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.882 [232/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:01.882 [233/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.882 [234/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:01.882 [235/723] Linking static target lib/librte_gso.a 00:01:01.882 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:01.882 [237/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:02.145 [238/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.146 [239/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:02.146 [240/723] Linking static target lib/librte_latencystats.a 00:01:02.146 [241/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.146 [242/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:02.146 [243/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:02.146 [244/723] Linking static target lib/librte_ip_frag.a 00:01:02.146 [245/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.146 [246/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.146 [247/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:02.406 [248/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:02.406 [249/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:02.406 [250/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:02.406 [251/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:02.406 [252/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:02.406 [253/723] Linking static target lib/librte_efd.a 00:01:02.406 [254/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:02.406 [255/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:02.406 [256/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.667 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:02.667 [258/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:02.667 [259/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:02.667 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:02.667 [261/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.668 [262/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:02.930 [263/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.930 [264/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:02.930 [265/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:02.930 [266/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:02.930 [267/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:02.930 [268/723] Compiling C object 
lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:02.930 [269/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.930 [270/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:03.192 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:03.192 [272/723] Linking static target lib/librte_regexdev.a 00:01:03.192 [273/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:03.192 [274/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:03.192 [275/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:03.192 [276/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:03.192 [277/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:03.192 [278/723] Linking static target lib/librte_rawdev.a 00:01:03.192 [279/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:03.452 [280/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:03.452 [281/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:03.452 [282/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:03.452 [283/723] Linking static target lib/librte_pcapng.a 00:01:03.452 [284/723] Linking static target lib/librte_power.a 00:01:03.452 [285/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:03.452 [286/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:03.452 [287/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:03.452 [288/723] Linking static target lib/librte_mldev.a 00:01:03.452 [289/723] Linking static target lib/librte_lpm.a 00:01:03.452 [290/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:03.452 [291/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:03.452 [292/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:03.452 [293/723] Linking static target lib/librte_stack.a 00:01:03.716 [294/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:03.716 [295/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:03.716 [296/723] Linking static target lib/acl/libavx2_tmp.a 00:01:03.716 [297/723] Linking static target lib/librte_reorder.a 00:01:03.716 [298/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:03.716 [299/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:03.716 [300/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.716 [301/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:03.716 [302/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.974 [303/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:03.974 [304/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:03.974 [305/723] Linking static target lib/librte_security.a 00:01:03.974 [306/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:03.974 [307/723] Linking static target lib/librte_cryptodev.a 00:01:03.974 [308/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.974 [309/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:03.974 [310/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:03.974 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:03.974 [312/723] Linking static target lib/librte_hash.a 00:01:04.236 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:04.236 [314/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:04.236 [315/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.236 [316/723] Linking static target lib/librte_rib.a 00:01:04.236 [317/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:04.236 [318/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:04.236 [319/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:04.236 [320/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.236 [321/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:04.236 [322/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.236 [323/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:04.236 [324/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:04.513 [325/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:04.513 [326/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:04.513 [327/723] Linking static target lib/acl/libavx512_tmp.a 00:01:04.513 [328/723] Linking static target lib/librte_acl.a 00:01:04.513 [329/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:04.513 [330/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:04.513 [331/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:04.513 [332/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:04.513 [333/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:04.513 [334/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.513 [335/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:04.513 [336/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:04.773 [337/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:04.773 [338/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:04.773 [339/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:05.037 [340/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.037 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.037 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:05.037 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.301 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:05.559 [345/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:05.559 [346/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:05.559 [347/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:05.559 [348/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:05.559 [349/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:05.559 [350/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:05.559 [351/723] Compiling C object 
lib/librte_table.a.p/table_rte_table_array.c.o 00:01:05.822 [352/723] Linking static target lib/librte_eventdev.a 00:01:05.822 [353/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:05.822 [354/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:05.822 [355/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:05.822 [356/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.822 [357/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:05.822 [358/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:05.822 [359/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:05.822 [360/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:05.822 [361/723] Linking static target lib/librte_sched.a 00:01:05.822 [362/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:05.822 [363/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:05.822 [364/723] Linking static target lib/librte_member.a 00:01:05.822 [365/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:05.822 [366/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:06.087 [367/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.087 [368/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:06.087 [369/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:06.087 [370/723] Linking static target lib/librte_fib.a 00:01:06.087 [371/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:06.087 [372/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:06.087 [373/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:06.088 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:06.088 [375/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:06.088 [376/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:06.350 [377/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:06.350 [378/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:06.351 [379/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:06.351 [380/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:06.351 [381/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.351 [382/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:06.351 [383/723] Linking static target lib/librte_ethdev.a 00:01:06.351 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:06.351 [385/723] Linking static target lib/librte_ipsec.a 00:01:06.351 [386/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:06.611 [387/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.612 [388/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.612 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:06.612 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:06.874 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 
00:01:06.874 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:06.874 [393/723] Linking static target lib/librte_pdump.a 00:01:06.874 [394/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:06.874 [395/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.874 [396/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:06.874 [397/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:07.134 [398/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:07.134 [399/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:07.134 [400/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:07.134 [401/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:07.134 [402/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:07.134 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:07.134 [404/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:07.134 [405/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:07.134 [406/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:07.400 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:07.400 [408/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.400 [409/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:07.400 [410/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:07.400 [411/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:07.400 [412/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:07.400 [413/723] Linking static target lib/librte_pdcp.a 00:01:07.400 [414/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:07.400 [415/723] Linking static target lib/librte_table.a 00:01:07.662 [416/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:07.662 [417/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:07.662 [418/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:07.662 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:07.662 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:07.921 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:07.921 [422/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:07.921 [423/723] Linking static target lib/librte_graph.a 00:01:07.921 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.921 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.182 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.182 [427/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.182 [428/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:08.182 [429/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:08.182 [430/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:08.182 [431/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:08.182 [432/723] Generating app/graph/commands_hdr with a custom command 
(wrapped by meson to capture output) 00:01:08.457 [433/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:08.457 [434/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.457 [435/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.457 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:08.457 [437/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:08.457 [438/723] Linking static target lib/librte_port.a 00:01:08.457 [439/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:08.457 [440/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.720 [441/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:08.720 [442/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.720 [443/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.720 [444/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.720 [445/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.720 [446/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:08.720 [447/723] Linking static target drivers/librte_bus_vdev.a 00:01:08.982 [448/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.982 [449/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:08.982 [450/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:08.982 [451/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.982 [452/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.982 [453/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:08.982 [454/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:08.982 [455/723] Linking static target lib/librte_node.a 00:01:09.244 [456/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:09.244 [457/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:09.244 [458/723] Linking static target drivers/librte_bus_pci.a 00:01:09.244 [459/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:09.244 [460/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.244 [461/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:09.244 [462/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:09.244 [463/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:09.244 [464/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.244 [465/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:09.244 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:09.507 [467/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:09.507 [468/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:09.507 [469/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:09.507 [470/723] Compiling C object 
app/dpdk-graph.p/graph_cli.c.o 00:01:09.507 [471/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:09.507 [472/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:09.507 [473/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:09.507 [474/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.770 [475/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:09.770 [476/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:09.770 [477/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:09.770 [478/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.770 [479/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:10.033 [480/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:10.033 [481/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:10.033 [482/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.033 [483/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:10.033 [484/723] Linking target lib/librte_eal.so.24.2 00:01:10.033 [485/723] Linking static target drivers/librte_mempool_ring.a 00:01:10.033 [486/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.033 [487/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.033 [488/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:10.293 [489/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:10.293 [490/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:10.293 [491/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:10.293 [492/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:10.293 [493/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:10.293 [494/723] Linking target lib/librte_ring.so.24.2 00:01:10.293 [495/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:10.293 [496/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:10.293 [497/723] Linking target lib/librte_meter.so.24.2 00:01:10.293 [498/723] Linking target lib/librte_pci.so.24.2 00:01:10.293 [499/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:10.552 [500/723] Linking target lib/librte_timer.so.24.2 00:01:10.552 [501/723] Linking target lib/librte_acl.so.24.2 00:01:10.552 [502/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:10.552 [503/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:10.552 [504/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:10.552 [505/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:10.552 [506/723] Linking target lib/librte_cfgfile.so.24.2 00:01:10.552 [507/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:10.552 [508/723] Linking target lib/librte_dmadev.so.24.2 00:01:10.552 [509/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:10.552 [510/723] Linking target lib/librte_jobstats.so.24.2 00:01:10.552 [511/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:10.552 [512/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 
00:01:10.552 [513/723] Linking target lib/librte_rcu.so.24.2 00:01:10.552 [514/723] Linking target lib/librte_mempool.so.24.2 00:01:10.552 [515/723] Linking target lib/librte_rawdev.so.24.2 00:01:10.552 [516/723] Linking target lib/librte_stack.so.24.2 00:01:10.552 [517/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:10.814 [518/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:10.814 [519/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:10.814 [520/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:10.814 [521/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:10.814 [522/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:10.814 [523/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:10.814 [524/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:10.814 [525/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:10.814 [526/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:10.814 [527/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:10.814 [528/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:10.814 [529/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:10.814 [530/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:10.814 [531/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:10.814 [532/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:11.078 [533/723] Linking target lib/librte_rib.so.24.2 00:01:11.078 [534/723] Linking target lib/librte_mbuf.so.24.2 00:01:11.078 [535/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:11.078 [536/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:11.078 [537/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:11.078 [538/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:11.078 [539/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:11.078 [540/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:11.339 [541/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:11.339 [542/723] Linking target lib/librte_bbdev.so.24.2 00:01:11.339 [543/723] Linking target lib/librte_compressdev.so.24.2 00:01:11.339 [544/723] Linking target lib/librte_net.so.24.2 00:01:11.339 [545/723] Linking target lib/librte_distributor.so.24.2 00:01:11.339 [546/723] Linking target lib/librte_gpudev.so.24.2 00:01:11.339 [547/723] Linking target lib/librte_cryptodev.so.24.2 00:01:11.339 [548/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:11.339 [549/723] Linking target lib/librte_regexdev.so.24.2 00:01:11.339 [550/723] Linking target lib/librte_mldev.so.24.2 00:01:11.339 [551/723] Linking target lib/librte_reorder.so.24.2 00:01:11.608 [552/723] Linking target lib/librte_sched.so.24.2 00:01:11.608 [553/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:11.608 [554/723] Linking target 
lib/librte_fib.so.24.2 00:01:11.608 [555/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:11.608 [556/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:11.608 [557/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:11.608 [558/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:11.608 [559/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:11.608 [560/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:11.608 [561/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:11.608 [562/723] Linking target lib/librte_cmdline.so.24.2 00:01:11.608 [563/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:11.608 [564/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:11.608 [565/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:11.608 [566/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:11.608 [567/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:11.608 [568/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:11.608 [569/723] Linking target lib/librte_hash.so.24.2 00:01:11.608 [570/723] Linking target lib/librte_security.so.24.2 00:01:11.608 [571/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:11.608 [572/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:11.608 [573/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:11.608 [574/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:11.880 [575/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:11.880 [576/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:11.880 [577/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:11.880 [578/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:11.880 [579/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:11.880 [580/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:11.880 [581/723] Linking target lib/librte_efd.so.24.2 00:01:11.880 [582/723] Linking target lib/librte_lpm.so.24.2 00:01:11.880 [583/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:12.144 [584/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:12.144 [585/723] Linking target lib/librte_member.so.24.2 00:01:12.144 [586/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:12.144 [587/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:12.144 [588/723] Linking target lib/librte_ipsec.so.24.2 00:01:12.144 [589/723] Linking target lib/librte_pdcp.so.24.2 00:01:12.144 [590/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:12.144 [591/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:12.403 [592/723] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:12.403 [593/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:12.403 [594/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:12.403 [595/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:12.403 [596/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:12.403 [597/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:12.666 [598/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:12.666 [599/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:12.666 [600/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:12.666 [601/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:12.927 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:12.927 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:12.927 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:12.927 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:12.927 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:12.927 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:12.927 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:13.185 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:13.185 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:13.185 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:13.185 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:13.185 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:13.185 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:13.185 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:13.185 [616/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:13.449 [617/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:13.449 [618/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:13.449 [619/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:13.449 [620/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:13.449 [621/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:13.708 [622/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:13.708 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:13.708 [624/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:13.708 [625/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:13.966 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:13.966 [627/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:13.966 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:13.966 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:13.966 [630/723] 
Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:13.966 [631/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:14.223 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:14.223 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:14.223 [634/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:14.223 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:14.223 [636/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:14.223 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:14.223 [638/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:14.223 [639/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.223 [640/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:14.480 [641/723] Linking target lib/librte_ethdev.so.24.2 00:01:14.480 [642/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:14.480 [643/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:14.480 [644/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:14.480 [645/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:14.480 [646/723] Linking target lib/librte_metrics.so.24.2 00:01:14.480 [647/723] Linking target lib/librte_gso.so.24.2 00:01:14.480 [648/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:14.480 [649/723] Linking target lib/librte_pcapng.so.24.2 00:01:14.480 [650/723] Linking target lib/librte_gro.so.24.2 00:01:14.480 [651/723] Linking target lib/librte_ip_frag.so.24.2 00:01:14.480 [652/723] Linking target lib/librte_power.so.24.2 00:01:14.480 [653/723] Linking target lib/librte_eventdev.so.24.2 00:01:14.480 [654/723] Linking target lib/librte_bpf.so.24.2 00:01:14.738 [655/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:14.738 [656/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:14.738 [657/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:14.738 [658/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:14.738 [659/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:14.738 [660/723] Linking target lib/librte_bitratestats.so.24.2 00:01:14.738 [661/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:14.738 [662/723] Linking target lib/librte_latencystats.so.24.2 00:01:14.738 [663/723] Linking target lib/librte_pdump.so.24.2 00:01:14.738 [664/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:14.738 [665/723] Linking target lib/librte_graph.so.24.2 00:01:14.738 [666/723] Linking target lib/librte_dispatcher.so.24.2 00:01:14.738 [667/723] Linking target lib/librte_port.so.24.2 00:01:14.738 [668/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:14.995 [669/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:14.995 [670/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:14.995 [671/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:14.995 [672/723] Generating 
symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:14.995 [673/723] Linking target lib/librte_node.so.24.2 00:01:14.995 [674/723] Linking target lib/librte_table.so.24.2 00:01:14.995 [675/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:15.264 [676/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:15.264 [677/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:15.264 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:15.264 [679/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:15.831 [680/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:15.831 [681/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:15.831 [682/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:16.089 [683/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:16.089 [684/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:16.089 [685/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:16.089 [686/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:16.089 [687/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:16.089 [688/723] Linking static target drivers/librte_net_i40e.a 00:01:16.654 [689/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:16.654 [690/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.911 [691/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:16.911 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:17.845 [693/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:17.845 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:18.103 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:26.209 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:26.209 [697/723] Linking static target lib/librte_pipeline.a 00:01:26.209 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:26.209 [699/723] Linking static target lib/librte_vhost.a 00:01:26.774 [700/723] Linking target app/dpdk-test-acl 00:01:26.774 [701/723] Linking target app/dpdk-dumpcap 00:01:26.774 [702/723] Linking target app/dpdk-test-fib 00:01:26.774 [703/723] Linking target app/dpdk-test-regex 00:01:26.774 [704/723] Linking target app/dpdk-graph 00:01:26.774 [705/723] Linking target app/dpdk-test-gpudev 00:01:26.774 [706/723] Linking target app/dpdk-pdump 00:01:26.774 [707/723] Linking target app/dpdk-test-dma-perf 00:01:26.774 [708/723] Linking target app/dpdk-test-mldev 00:01:26.774 [709/723] Linking target app/dpdk-test-sad 00:01:26.774 [710/723] Linking target app/dpdk-test-flow-perf 00:01:26.774 [711/723] Linking target app/dpdk-test-pipeline 00:01:26.774 [712/723] Linking target app/dpdk-test-bbdev 00:01:26.774 [713/723] Linking target app/dpdk-test-security-perf 00:01:26.774 [714/723] Linking target app/dpdk-test-eventdev 00:01:26.774 [715/723] Linking target app/dpdk-test-crypto-perf 00:01:26.774 [716/723] Linking target app/dpdk-proc-info 00:01:26.774 [717/723] Linking target app/dpdk-test-cmdline 00:01:26.774 [718/723] Linking target app/dpdk-test-compress-perf 00:01:26.774 [719/723] 
Linking target app/dpdk-testpmd 00:01:27.341 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.341 [721/723] Linking target lib/librte_vhost.so.24.2 00:01:28.712 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.712 [723/723] Linking target lib/librte_pipeline.so.24.2 00:01:28.712 11:56:36 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:28.712 11:56:36 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:28.712 11:56:36 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:01:28.712 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:28.712 [0/1] Installing files. 00:01:28.973 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:28.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:28.974 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:28.974 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:28.975 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:28.975 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:28.976 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:28.976 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:28.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:28.977 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:28.977 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:28.978 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.978 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:28.979 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:28.979 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_compressdev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.979 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_power.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.238 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:29.238 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.239 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:29.239 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.239 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:29.239 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.239 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:29.239 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 
Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.503 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.504 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.505 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:29.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:29.506 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:29.506 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:29.506 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:29.506 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:29.506 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:01:29.506 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:01:29.506 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:29.506 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:29.506 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:29.506 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:29.506 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:29.506 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:29.506 Installing symlink pointing 
to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:29.506 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:29.506 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:29.506 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:29.506 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:29.506 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:29.506 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:29.506 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:29.506 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:29.506 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:29.506 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:29.506 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:29.506 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:29.506 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:29.506 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:29.506 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:29.506 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:29.506 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:29.506 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:29.506 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:29.506 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:29.506 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:29.506 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:29.506 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:29.506 Installing symlink pointing to librte_bbdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:29.506 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:29.506 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:29.506 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:29.506 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:29.506 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:29.506 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:29.506 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:29.507 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:29.507 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:29.507 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:29.507 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:29.507 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:29.507 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:29.507 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:29.507 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:29.507 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:29.507 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:29.507 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:29.507 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:29.507 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:29.507 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:29.507 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:29.507 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:29.507 Installing 
symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:29.507 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:29.507 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:29.507 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:29.507 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:29.507 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:29.507 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:29.507 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:29.507 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:29.507 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:29.507 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:29.507 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:29.507 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:29.507 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:29.507 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:29.507 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:29.507 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:29.507 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:29.507 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:29.507 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:29.507 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:29.507 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:29.507 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:29.507 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:29.507 Installing symlink pointing to librte_rib.so.24.2 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:29.507 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:29.507 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:29.507 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:29.507 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:29.507 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:29.507 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:29.507 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:29.507 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:29.507 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:29.507 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:29.507 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:29.507 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:29.507 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:29.507 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:29.507 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:29.507 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:29.507 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:29.507 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:29.507 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:29.507 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:29.507 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:29.507 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:29.507 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:29.507 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 
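The symlink installs above (and continuing below) implement the standard shared-library versioning chain: each library ships as librte_X.so.24.2, with librte_X.so.24 (the runtime soname) and librte_X.so (the link-time name) layered on top of it. A minimal sketch of what the install step effectively runs, using librte_pipeline from this log as the example:

  # version 24.2 is the ABI built here; run from the installed lib dir
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
  ln -sf librte_pipeline.so.24.2 librte_pipeline.so.24   # soname the dynamic loader resolves
  ln -sf librte_pipeline.so.24   librte_pipeline.so      # name the linker (-lrte_pipeline) uses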
00:01:29.507 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:29.507 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:29.507 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:29.507 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:29.507 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:29.507 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:01:29.507 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:01:29.507 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:01:29.507 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:01:29.507 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:01:29.507 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:01:29.507 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:01:29.507 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:01:29.507 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:01:29.507 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:01:29.507 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:01:29.507 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:01:29.507 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:01:29.507 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:01:29.508 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:01:29.508 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:01:29.508 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:01:29.508 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:01:29.508 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:01:29.508 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:01:29.508 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:01:29.508 11:56:37 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:01:29.508 11:56:37 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.508 00:01:29.508 real 0m39.592s 00:01:29.508 user 13m53.744s 00:01:29.508 sys 2m1.122s 00:01:29.508 11:56:37 
build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:29.508 11:56:37 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:29.508 ************************************ 00:01:29.508 END TEST build_native_dpdk 00:01:29.508 ************************************ 00:01:29.508 11:56:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:29.508 11:56:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.508 11:56:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.508 11:56:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.508 11:56:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.508 11:56:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.508 11:56:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.508 11:56:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.508 11:56:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:29.508 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:29.798 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:29.798 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:29.798 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:30.056 Using 'verbs' RDMA provider 00:01:40.631 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:50.615 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:50.615 Creating mk/config.mk...done. 00:01:50.615 Creating mk/cc.flags.mk...done. 00:01:50.615 Type 'make' to build. 00:01:50.615 11:56:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:50.615 11:56:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:50.615 11:56:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:50.615 11:56:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.615 ************************************ 00:01:50.615 START TEST make 00:01:50.615 ************************************ 00:01:50.615 11:56:56 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:50.615 make[1]: Nothing to be done for 'all'.
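The configure step above locates the freshly installed DPDK through its pkg-config files (libdpdk.pc under dpdk/build/lib/pkgconfig, as printed in the "Using ... for additional libs" line). A sketch of how any consumer could resolve the same install; the compile line is hypothetical:

  # point pkg-config at the local DPDK install from this build
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk                       # prints the installed DPDK version
  # cc my_app.c $(pkg-config --cflags --libs libdpdk)   # hypothetical consumer build line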
00:01:50.879 The Meson build system
00:01:50.879 Version: 1.3.1
00:01:50.879 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:50.879 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:50.879 Build type: native build
00:01:50.879 Project name: libvfio-user
00:01:50.879 Project version: 0.0.1
00:01:50.879 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:50.879 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:50.879 Host machine cpu family: x86_64
00:01:50.879 Host machine cpu: x86_64
00:01:50.879 Run-time dependency threads found: YES
00:01:50.880 Library dl found: YES
00:01:50.880 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:50.880 Run-time dependency json-c found: YES 0.17
00:01:50.880 Run-time dependency cmocka found: YES 1.1.7
00:01:50.880 Program pytest-3 found: NO
00:01:50.880 Program flake8 found: NO
00:01:50.880 Program misspell-fixer found: NO
00:01:50.880 Program restructuredtext-lint found: NO
00:01:50.880 Program valgrind found: YES (/usr/bin/valgrind)
00:01:50.880 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:50.880 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:50.880 Compiler for C supports arguments -Wwrite-strings: YES
00:01:50.880 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:50.880 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:50.880 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:50.880 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:50.880 Build targets in project: 8
00:01:50.880 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:50.880 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:50.880
00:01:50.880 libvfio-user 0.0.1
00:01:50.880
00:01:50.880 User defined options
00:01:50.880 buildtype : debug
00:01:50.880 default_library: shared
00:01:50.880 libdir : /usr/local/lib
00:01:50.880
00:01:50.880 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:51.829 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:51.829 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:51.829 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:51.829 [3/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:51.829 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:51.829 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:52.093 [6/37] Compiling C object samples/null.p/null.c.o
00:01:52.093 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:52.093 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:52.093 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:52.093 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:52.093 [11/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:52.093 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:52.093 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:52.093 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:52.093 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:52.093 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:52.093 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:52.093 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:52.093 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:52.093 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:52.093 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:52.093 [22/37] Compiling C object samples/server.p/server.c.o
00:01:52.093 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:52.093 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:52.093 [25/37] Compiling C object samples/client.p/client.c.o
00:01:52.093 [26/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:52.093 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:52.093 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:01:52.093 [29/37] Linking target samples/client
00:01:52.353 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:52.353 [31/37] Linking target test/unit_tests
00:01:52.353 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:52.353 [33/37] Linking target samples/lspci
00:01:52.353 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:52.617 [35/37] Linking target samples/null
00:01:52.617 [36/37] Linking target samples/gpio-pci-idio-16
00:01:52.617 [37/37] Linking target samples/server
00:01:52.617 INFO: autodetecting backend as ninja
00:01:52.617 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:52.617 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:53.187 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:53.447 ninja: no work to do.
00:02:05.691 CC lib/ut_mock/mock.o
00:02:05.691 CC lib/ut/ut.o
00:02:05.691 CC lib/log/log.o
00:02:05.691 CC lib/log/log_flags.o
00:02:05.691 CC lib/log/log_deprecated.o
00:02:05.691 LIB libspdk_ut.a
00:02:05.691 LIB libspdk_ut_mock.a
00:02:05.691 LIB libspdk_log.a
00:02:05.691 SO libspdk_ut.so.2.0
00:02:05.691 SO libspdk_ut_mock.so.6.0
00:02:05.691 SO libspdk_log.so.7.0
00:02:05.691 SYMLINK libspdk_ut.so
00:02:05.691 SYMLINK libspdk_ut_mock.so
00:02:05.691 SYMLINK libspdk_log.so
00:02:05.964 CXX lib/trace_parser/trace.o
00:02:05.964 CC lib/util/base64.o
00:02:05.964 CC lib/dma/dma.o
00:02:05.964 CC lib/util/bit_array.o
00:02:05.964 CC lib/util/cpuset.o
00:02:05.964 CC lib/ioat/ioat.o
00:02:05.964 CC lib/util/crc16.o
00:02:05.964 CC lib/util/crc32.o
00:02:05.964 CC lib/util/crc32c.o
00:02:05.964 CC lib/util/crc32_ieee.o
00:02:05.964 CC lib/util/crc64.o
00:02:05.964 CC lib/util/dif.o
00:02:05.964 CC lib/util/fd.o
00:02:05.964 CC lib/util/fd_group.o
00:02:05.964 CC lib/util/file.o
00:02:05.964 CC lib/util/hexlify.o
00:02:05.964 CC lib/util/iov.o
00:02:05.964 CC lib/util/math.o
00:02:05.964 CC lib/util/net.o
00:02:05.964 CC lib/util/pipe.o
00:02:05.964 CC lib/util/strerror_tls.o
00:02:05.964 CC lib/util/string.o
00:02:05.964 CC lib/util/uuid.o
00:02:05.964 CC lib/util/xor.o
00:02:05.964 CC lib/util/zipf.o
00:02:05.964 CC lib/vfio_user/host/vfio_user_pci.o
00:02:05.964 CC lib/vfio_user/host/vfio_user.o
00:02:05.964 LIB libspdk_dma.a
00:02:05.964 SO libspdk_dma.so.4.0
00:02:06.221 SYMLINK libspdk_dma.so
00:02:06.221 LIB libspdk_ioat.a
00:02:06.221 SO libspdk_ioat.so.7.0
00:02:06.221 LIB libspdk_vfio_user.a
00:02:06.221 SYMLINK libspdk_ioat.so
00:02:06.221 SO libspdk_vfio_user.so.5.0
00:02:06.221 SYMLINK libspdk_vfio_user.so
00:02:06.478 LIB libspdk_util.a
00:02:06.478 SO libspdk_util.so.10.0
00:02:06.478 SYMLINK libspdk_util.so
00:02:06.735 CC lib/vmd/vmd.o
00:02:06.735 CC lib/conf/conf.o
00:02:06.735 CC lib/rdma_utils/rdma_utils.o
00:02:06.735 CC lib/rdma_provider/common.o
00:02:06.735 CC lib/env_dpdk/env.o
00:02:06.735 CC lib/idxd/idxd.o
00:02:06.735 CC lib/json/json_parse.o
00:02:06.735 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:06.735 CC lib/vmd/led.o
00:02:06.735 CC lib/json/json_util.o
00:02:06.735 CC lib/env_dpdk/memory.o
00:02:06.735 CC lib/idxd/idxd_user.o
00:02:06.735 CC lib/json/json_write.o
00:02:06.735 CC lib/env_dpdk/pci.o
00:02:06.735 CC lib/idxd/idxd_kernel.o
00:02:06.735 CC lib/env_dpdk/init.o
00:02:06.735 CC lib/env_dpdk/threads.o
00:02:06.735 CC lib/env_dpdk/pci_ioat.o
00:02:06.735 CC lib/env_dpdk/pci_virtio.o
00:02:06.735 CC lib/env_dpdk/pci_vmd.o
00:02:06.735 CC lib/env_dpdk/pci_idxd.o
00:02:06.735 CC lib/env_dpdk/pci_event.o
00:02:06.735 CC lib/env_dpdk/sigbus_handler.o
00:02:06.735 CC lib/env_dpdk/pci_dpdk.o
00:02:06.735 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:06.735 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:06.735 LIB libspdk_trace_parser.a
00:02:06.735 SO libspdk_trace_parser.so.5.0
00:02:06.992 SYMLINK libspdk_trace_parser.so
00:02:06.992 LIB libspdk_rdma_provider.a
00:02:06.992 SO libspdk_rdma_provider.so.6.0
00:02:06.992 LIB libspdk_conf.a
00:02:06.992 SO libspdk_conf.so.6.0
00:02:06.992 SYMLINK libspdk_rdma_provider.so
00:02:06.992 LIB libspdk_json.a
00:02:06.992 SYMLINK libspdk_conf.so
00:02:06.992 SO libspdk_json.so.6.0
00:02:07.250 LIB libspdk_rdma_utils.a
00:02:07.250 SO libspdk_rdma_utils.so.1.0
00:02:07.250 SYMLINK libspdk_json.so
00:02:07.250 SYMLINK libspdk_rdma_utils.so
00:02:07.251 CC lib/jsonrpc/jsonrpc_server.o
00:02:07.251 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:07.251 CC lib/jsonrpc/jsonrpc_client.o
00:02:07.251 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:07.251 LIB libspdk_idxd.a
00:02:07.251 SO libspdk_idxd.so.12.0
00:02:07.509 SYMLINK libspdk_idxd.so
00:02:07.509 LIB libspdk_vmd.a
00:02:07.509 SO libspdk_vmd.so.6.0
00:02:07.509 SYMLINK libspdk_vmd.so
00:02:07.509 LIB libspdk_jsonrpc.a
00:02:07.509 SO libspdk_jsonrpc.so.6.0
00:02:07.767 SYMLINK libspdk_jsonrpc.so
00:02:07.767 CC lib/rpc/rpc.o
00:02:08.025 LIB libspdk_rpc.a
00:02:08.025 SO libspdk_rpc.so.6.0
00:02:08.025 SYMLINK libspdk_rpc.so
00:02:08.283 LIB libspdk_env_dpdk.a
00:02:08.283 SO libspdk_env_dpdk.so.14.1
00:02:08.283 CC lib/trace/trace.o
00:02:08.283 CC lib/trace/trace_flags.o
00:02:08.283 CC lib/keyring/keyring.o
00:02:08.283 CC lib/trace/trace_rpc.o
00:02:08.283 CC lib/keyring/keyring_rpc.o
00:02:08.283 CC lib/notify/notify.o
00:02:08.283 CC lib/notify/notify_rpc.o
00:02:08.541 SYMLINK libspdk_env_dpdk.so
00:02:08.541 LIB libspdk_notify.a
00:02:08.541 SO libspdk_notify.so.6.0
00:02:08.541 LIB libspdk_keyring.a
00:02:08.541 SYMLINK libspdk_notify.so
00:02:08.541 LIB libspdk_trace.a
00:02:08.541 SO libspdk_keyring.so.1.0
00:02:08.541 SO libspdk_trace.so.10.0
00:02:08.541 SYMLINK libspdk_keyring.so
00:02:08.541 SYMLINK libspdk_trace.so
00:02:08.799 CC lib/sock/sock.o
00:02:08.799 CC lib/sock/sock_rpc.o
00:02:08.799 CC lib/thread/thread.o
00:02:08.799 CC lib/thread/iobuf.o
00:02:09.365 LIB libspdk_sock.a
00:02:09.365 SO libspdk_sock.so.10.0
00:02:09.365 SYMLINK libspdk_sock.so
00:02:09.365 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:09.365 CC lib/nvme/nvme_ctrlr.o
00:02:09.365 CC lib/nvme/nvme_fabric.o
00:02:09.365 CC lib/nvme/nvme_ns_cmd.o
00:02:09.365 CC lib/nvme/nvme_ns.o
00:02:09.365 CC lib/nvme/nvme_pcie_common.o
00:02:09.365 CC lib/nvme/nvme_pcie.o
00:02:09.365 CC lib/nvme/nvme_qpair.o
00:02:09.624 CC lib/nvme/nvme.o
00:02:09.624 CC lib/nvme/nvme_quirks.o
00:02:09.624 CC lib/nvme/nvme_transport.o
00:02:09.624 CC lib/nvme/nvme_discovery.o
00:02:09.624 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:09.624 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:09.624 CC lib/nvme/nvme_tcp.o
00:02:09.624 CC lib/nvme/nvme_opal.o
00:02:09.624 CC lib/nvme/nvme_io_msg.o
00:02:09.624 CC lib/nvme/nvme_poll_group.o
00:02:09.624 CC lib/nvme/nvme_zns.o
00:02:09.624 CC lib/nvme/nvme_stubs.o
00:02:09.624 CC lib/nvme/nvme_auth.o
00:02:09.624 CC lib/nvme/nvme_cuse.o
00:02:09.624 CC lib/nvme/nvme_vfio_user.o
00:02:09.624 CC lib/nvme/nvme_rdma.o
00:02:10.559 LIB libspdk_thread.a
00:02:10.559 SO libspdk_thread.so.10.1
00:02:10.559 SYMLINK libspdk_thread.so
00:02:10.559 CC lib/init/json_config.o
00:02:10.559 CC lib/vfu_tgt/tgt_endpoint.o
00:02:10.559 CC lib/init/subsystem.o
00:02:10.559 CC lib/accel/accel.o
00:02:10.559 CC lib/virtio/virtio.o
00:02:10.559 CC lib/accel/accel_rpc.o
00:02:10.559 CC lib/init/subsystem_rpc.o
00:02:10.559 CC lib/vfu_tgt/tgt_rpc.o
00:02:10.559 CC lib/virtio/virtio_vhost_user.o
00:02:10.559 CC lib/accel/accel_sw.o
00:02:10.559 CC lib/virtio/virtio_vfio_user.o
00:02:10.559 CC lib/init/rpc.o
00:02:10.559 CC lib/virtio/virtio_pci.o
00:02:10.559 CC lib/blob/blobstore.o
00:02:10.559 CC lib/blob/request.o
00:02:10.559 CC lib/blob/zeroes.o
00:02:10.559 CC lib/blob/blob_bs_dev.o
00:02:10.816 LIB libspdk_init.a
00:02:10.816 SO libspdk_init.so.5.0
00:02:11.073 LIB libspdk_virtio.a
00:02:11.073 LIB libspdk_vfu_tgt.a
00:02:11.073 SYMLINK libspdk_init.so
00:02:11.073 SO libspdk_vfu_tgt.so.3.0
00:02:11.073 SO libspdk_virtio.so.7.0
00:02:11.073 SYMLINK libspdk_vfu_tgt.so
00:02:11.073 SYMLINK libspdk_virtio.so
00:02:11.073 CC lib/event/app.o
00:02:11.073 CC lib/event/reactor.o
00:02:11.073 CC lib/event/log_rpc.o
00:02:11.073 CC lib/event/app_rpc.o
00:02:11.073 CC lib/event/scheduler_static.o
00:02:11.637 LIB libspdk_event.a
00:02:11.637 SO libspdk_event.so.14.0
00:02:11.637 LIB libspdk_accel.a
00:02:11.637 SYMLINK libspdk_event.so
00:02:11.637 SO libspdk_accel.so.16.0
00:02:11.894 SYMLINK libspdk_accel.so
00:02:11.894 LIB libspdk_nvme.a
00:02:11.894 CC lib/bdev/bdev.o
00:02:11.894 CC lib/bdev/bdev_rpc.o
00:02:11.894 CC lib/bdev/bdev_zone.o
00:02:11.894 CC lib/bdev/part.o
00:02:11.894 CC lib/bdev/scsi_nvme.o
00:02:12.152 SO libspdk_nvme.so.13.1
00:02:12.408 SYMLINK libspdk_nvme.so
00:02:13.780 LIB libspdk_blob.a
00:02:13.780 SO libspdk_blob.so.11.0
00:02:13.780 SYMLINK libspdk_blob.so
00:02:14.038 CC lib/lvol/lvol.o
00:02:14.038 CC lib/blobfs/blobfs.o
00:02:14.038 CC lib/blobfs/tree.o
00:02:14.604 LIB libspdk_bdev.a
00:02:14.604 SO libspdk_bdev.so.16.0
00:02:14.604 SYMLINK libspdk_bdev.so
00:02:14.604 LIB libspdk_blobfs.a
00:02:14.866 SO libspdk_blobfs.so.10.0
00:02:14.866 CC lib/ublk/ublk.o
00:02:14.866 CC lib/ublk/ublk_rpc.o
00:02:14.866 CC lib/ftl/ftl_core.o
00:02:14.866 CC lib/nvmf/ctrlr.o
00:02:14.866 CC lib/nbd/nbd.o
00:02:14.866 CC lib/scsi/dev.o
00:02:14.866 CC lib/ftl/ftl_init.o
00:02:14.866 CC lib/nvmf/ctrlr_discovery.o
00:02:14.866 CC lib/scsi/lun.o
00:02:14.866 CC lib/nbd/nbd_rpc.o
00:02:14.866 CC lib/ftl/ftl_layout.o
00:02:14.866 CC lib/nvmf/ctrlr_bdev.o
00:02:14.866 CC lib/scsi/port.o
00:02:14.866 CC lib/ftl/ftl_debug.o
00:02:14.866 CC lib/nvmf/subsystem.o
00:02:14.866 CC lib/scsi/scsi.o
00:02:14.866 CC lib/ftl/ftl_io.o
00:02:14.866 CC lib/scsi/scsi_bdev.o
00:02:14.866 CC lib/ftl/ftl_sb.o
00:02:14.866 CC lib/nvmf/nvmf.o
00:02:14.866 CC lib/scsi/scsi_pr.o
00:02:14.866 CC lib/nvmf/nvmf_rpc.o
00:02:14.866 CC lib/scsi/scsi_rpc.o
00:02:14.866 CC lib/ftl/ftl_l2p.o
00:02:14.866 CC lib/ftl/ftl_l2p_flat.o
00:02:14.866 CC lib/nvmf/transport.o
00:02:14.866 CC lib/scsi/task.o
00:02:14.866 CC lib/nvmf/tcp.o
00:02:14.866 CC lib/ftl/ftl_nv_cache.o
00:02:14.867 CC lib/ftl/ftl_band.o
00:02:14.867 CC lib/nvmf/stubs.o
00:02:14.867 CC lib/ftl/ftl_band_ops.o
00:02:14.867 CC lib/nvmf/mdns_server.o
00:02:14.867 CC lib/nvmf/vfio_user.o
00:02:14.867 CC lib/ftl/ftl_writer.o
00:02:14.867 CC lib/ftl/ftl_rq.o
00:02:14.867 CC lib/nvmf/rdma.o
00:02:14.867 CC lib/ftl/ftl_reloc.o
00:02:14.867 CC lib/nvmf/auth.o
00:02:14.867 CC lib/ftl/ftl_l2p_cache.o
00:02:14.867 CC lib/ftl/ftl_p2l.o
00:02:14.867 CC lib/ftl/mngt/ftl_mngt.o
00:02:14.867 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:14.867 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:14.867 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:14.867 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:14.867 LIB libspdk_lvol.a
00:02:14.867 SYMLINK libspdk_blobfs.so
00:02:14.867 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:14.867 SO libspdk_lvol.so.10.0
00:02:15.125 SYMLINK libspdk_lvol.so
00:02:15.125 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:15.125 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:15.125 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:15.125 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:15.125 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:15.125 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:15.125 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:15.125 CC lib/ftl/utils/ftl_conf.o
00:02:15.125 CC lib/ftl/utils/ftl_md.o
00:02:15.125 CC lib/ftl/utils/ftl_mempool.o
00:02:15.125 CC lib/ftl/utils/ftl_bitmap.o
00:02:15.125 CC lib/ftl/utils/ftl_property.o
00:02:15.396 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:15.396 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:15.396 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:15.396 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:15.396 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:15.396 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:15.396 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:15.396 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:15.396 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:15.397 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:15.397 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:15.397 CC lib/ftl/base/ftl_base_dev.o
00:02:15.397 CC lib/ftl/base/ftl_base_bdev.o
00:02:15.397 CC lib/ftl/ftl_trace.o
00:02:15.653 LIB libspdk_nbd.a
00:02:15.653 SO libspdk_nbd.so.7.0
00:02:15.653 SYMLINK libspdk_nbd.so
00:02:15.653 LIB libspdk_scsi.a
00:02:15.653 SO libspdk_scsi.so.9.0
00:02:15.910 SYMLINK libspdk_scsi.so
00:02:15.910 LIB libspdk_ublk.a
00:02:15.910 SO libspdk_ublk.so.3.0
00:02:15.910 SYMLINK libspdk_ublk.so
00:02:15.910 CC lib/vhost/vhost.o
00:02:15.910 CC lib/iscsi/conn.o
00:02:15.910 CC lib/iscsi/init_grp.o
00:02:15.910 CC lib/vhost/vhost_rpc.o
00:02:15.910 CC lib/iscsi/iscsi.o
00:02:15.910 CC lib/vhost/vhost_scsi.o
00:02:15.910 CC lib/vhost/vhost_blk.o
00:02:15.910 CC lib/iscsi/md5.o
00:02:15.910 CC lib/iscsi/param.o
00:02:15.910 CC lib/vhost/rte_vhost_user.o
00:02:15.910 CC lib/iscsi/portal_grp.o
00:02:15.910 CC lib/iscsi/tgt_node.o
00:02:15.910 CC lib/iscsi/iscsi_subsystem.o
00:02:15.910 CC lib/iscsi/iscsi_rpc.o
00:02:15.910 CC lib/iscsi/task.o
00:02:16.475 LIB libspdk_ftl.a
00:02:16.475 SO libspdk_ftl.so.9.0
00:02:16.733 SYMLINK libspdk_ftl.so
00:02:17.299 LIB libspdk_vhost.a
00:02:17.299 SO libspdk_vhost.so.8.0
00:02:17.299 LIB libspdk_nvmf.a
00:02:17.299 SYMLINK libspdk_vhost.so
00:02:17.299 SO libspdk_nvmf.so.19.0
00:02:17.557 LIB libspdk_iscsi.a
00:02:17.557 SO libspdk_iscsi.so.8.0
00:02:17.557 SYMLINK libspdk_nvmf.so
00:02:17.557 SYMLINK libspdk_iscsi.so
00:02:17.815 CC module/env_dpdk/env_dpdk_rpc.o
00:02:17.815 CC module/vfu_device/vfu_virtio.o
00:02:17.815 CC module/vfu_device/vfu_virtio_blk.o
00:02:17.815 CC module/vfu_device/vfu_virtio_scsi.o
00:02:17.815 CC module/vfu_device/vfu_virtio_rpc.o
00:02:18.073 CC module/keyring/file/keyring.o
00:02:18.073 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:18.073 CC module/accel/iaa/accel_iaa.o
00:02:18.073 CC module/blob/bdev/blob_bdev.o
00:02:18.073 CC module/accel/dsa/accel_dsa.o
00:02:18.073 CC module/sock/posix/posix.o
00:02:18.073 CC module/keyring/file/keyring_rpc.o
00:02:18.073 CC module/accel/iaa/accel_iaa_rpc.o
00:02:18.073 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:18.073 CC module/accel/dsa/accel_dsa_rpc.o
00:02:18.073 CC module/keyring/linux/keyring.o
00:02:18.073 CC module/accel/ioat/accel_ioat.o
00:02:18.073 CC module/scheduler/gscheduler/gscheduler.o
00:02:18.073 CC module/accel/ioat/accel_ioat_rpc.o
00:02:18.073 CC module/keyring/linux/keyring_rpc.o
00:02:18.073 CC module/accel/error/accel_error.o
00:02:18.073 CC module/accel/error/accel_error_rpc.o
00:02:18.073 LIB libspdk_env_dpdk_rpc.a
00:02:18.073 SO libspdk_env_dpdk_rpc.so.6.0
00:02:18.073 SYMLINK libspdk_env_dpdk_rpc.so
00:02:18.073 LIB libspdk_keyring_linux.a
00:02:18.073 LIB libspdk_keyring_file.a
00:02:18.073 LIB libspdk_scheduler_dpdk_governor.a
00:02:18.073 LIB libspdk_scheduler_gscheduler.a
00:02:18.073 SO libspdk_keyring_linux.so.1.0
00:02:18.352 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:18.352 SO libspdk_scheduler_gscheduler.so.4.0
00:02:18.352 SO libspdk_keyring_file.so.1.0
00:02:18.352 LIB libspdk_accel_ioat.a
00:02:18.352 LIB libspdk_scheduler_dynamic.a
00:02:18.352 LIB libspdk_accel_error.a
00:02:18.352 LIB libspdk_accel_iaa.a
00:02:18.352 SO libspdk_accel_ioat.so.6.0
00:02:18.352 SYMLINK libspdk_keyring_linux.so
00:02:18.352 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:18.352 SYMLINK libspdk_scheduler_gscheduler.so
00:02:18.352 SO libspdk_scheduler_dynamic.so.4.0
00:02:18.352 SO libspdk_accel_error.so.2.0
00:02:18.352 SYMLINK libspdk_keyring_file.so
00:02:18.352 SO libspdk_accel_iaa.so.3.0
00:02:18.352 LIB libspdk_accel_dsa.a
00:02:18.352 SYMLINK libspdk_accel_ioat.so
00:02:18.352 LIB libspdk_blob_bdev.a
00:02:18.352 SYMLINK libspdk_scheduler_dynamic.so
00:02:18.352 SYMLINK libspdk_accel_error.so
00:02:18.352 SO libspdk_accel_dsa.so.5.0
00:02:18.352 SYMLINK libspdk_accel_iaa.so
00:02:18.352 SO libspdk_blob_bdev.so.11.0
00:02:18.352 SYMLINK libspdk_accel_dsa.so
00:02:18.352 SYMLINK libspdk_blob_bdev.so
00:02:18.630 LIB libspdk_vfu_device.a
00:02:18.630 SO libspdk_vfu_device.so.3.0
00:02:18.630 CC module/bdev/gpt/gpt.o
00:02:18.630 CC module/bdev/gpt/vbdev_gpt.o
00:02:18.630 CC module/blobfs/bdev/blobfs_bdev.o
00:02:18.630 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:18.630 CC module/bdev/lvol/vbdev_lvol.o
00:02:18.630 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:18.630 CC module/bdev/nvme/bdev_nvme.o
00:02:18.630 CC module/bdev/error/vbdev_error.o
00:02:18.630 CC module/bdev/split/vbdev_split.o
00:02:18.630 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:18.630 CC module/bdev/delay/vbdev_delay.o
00:02:18.630 CC module/bdev/malloc/bdev_malloc.o
00:02:18.630 CC module/bdev/nvme/nvme_rpc.o
00:02:18.630 CC module/bdev/error/vbdev_error_rpc.o
00:02:18.630 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:18.630 CC module/bdev/split/vbdev_split_rpc.o
00:02:18.630 CC module/bdev/aio/bdev_aio.o
00:02:18.630 CC module/bdev/raid/bdev_raid.o
00:02:18.630 CC module/bdev/aio/bdev_aio_rpc.o
00:02:18.630 CC module/bdev/iscsi/bdev_iscsi.o
00:02:18.630 CC module/bdev/nvme/bdev_mdns_client.o
00:02:18.630 CC module/bdev/raid/bdev_raid_rpc.o
00:02:18.630 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:18.630 CC module/bdev/passthru/vbdev_passthru.o
00:02:18.630 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:18.630 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:18.630 CC module/bdev/raid/bdev_raid_sb.o
00:02:18.630 CC module/bdev/ftl/bdev_ftl.o
00:02:18.630 CC module/bdev/raid/raid0.o
00:02:18.630 CC module/bdev/nvme/vbdev_opal.o
00:02:18.630 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:18.630 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:18.630 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:18.630 CC module/bdev/raid/raid1.o
00:02:18.630 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:18.630 CC module/bdev/null/bdev_null.o
00:02:18.630 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:18.630 CC module/bdev/raid/concat.o
00:02:18.630 CC module/bdev/null/bdev_null_rpc.o
00:02:18.630 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:18.630 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:18.630 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:18.907 SYMLINK libspdk_vfu_device.so
00:02:18.907 LIB libspdk_sock_posix.a
00:02:18.907 SO libspdk_sock_posix.so.6.0
00:02:18.907 SYMLINK libspdk_sock_posix.so
00:02:18.907 LIB libspdk_blobfs_bdev.a
00:02:19.165 SO libspdk_blobfs_bdev.so.6.0
00:02:19.165 LIB libspdk_bdev_split.a
00:02:19.165 SO libspdk_bdev_split.so.6.0
00:02:19.165 SYMLINK libspdk_blobfs_bdev.so
00:02:19.165 LIB libspdk_bdev_gpt.a
00:02:19.165 LIB libspdk_bdev_ftl.a
00:02:19.165 SYMLINK libspdk_bdev_split.so
00:02:19.165 SO libspdk_bdev_gpt.so.6.0
00:02:19.165 LIB libspdk_bdev_null.a
00:02:19.165 SO libspdk_bdev_ftl.so.6.0
00:02:19.165 LIB libspdk_bdev_error.a
00:02:19.165 SO libspdk_bdev_null.so.6.0
00:02:19.165 SO libspdk_bdev_error.so.6.0
00:02:19.166 SYMLINK libspdk_bdev_gpt.so
00:02:19.166 SYMLINK libspdk_bdev_ftl.so
00:02:19.166 LIB libspdk_bdev_passthru.a
00:02:19.166 LIB libspdk_bdev_iscsi.a
00:02:19.166 LIB libspdk_bdev_aio.a
00:02:19.166 SYMLINK libspdk_bdev_null.so
00:02:19.166 LIB libspdk_bdev_zone_block.a
00:02:19.166 SO libspdk_bdev_iscsi.so.6.0
00:02:19.166 SO libspdk_bdev_passthru.so.6.0
00:02:19.166 SO libspdk_bdev_aio.so.6.0
00:02:19.166 SYMLINK libspdk_bdev_error.so
00:02:19.166 SO libspdk_bdev_zone_block.so.6.0
00:02:19.166 LIB libspdk_bdev_malloc.a
00:02:19.166 LIB libspdk_bdev_delay.a
00:02:19.423 SYMLINK libspdk_bdev_passthru.so
00:02:19.423 SYMLINK libspdk_bdev_aio.so
00:02:19.423 SYMLINK libspdk_bdev_iscsi.so
00:02:19.423 SO libspdk_bdev_malloc.so.6.0
00:02:19.423 SO libspdk_bdev_delay.so.6.0
00:02:19.423 SYMLINK libspdk_bdev_zone_block.so
00:02:19.423 SYMLINK libspdk_bdev_malloc.so
00:02:19.423 SYMLINK libspdk_bdev_delay.so
00:02:19.423 LIB libspdk_bdev_virtio.a
00:02:19.423 LIB libspdk_bdev_lvol.a
00:02:19.423 SO libspdk_bdev_virtio.so.6.0
00:02:19.423 SO libspdk_bdev_lvol.so.6.0
00:02:19.423 SYMLINK libspdk_bdev_virtio.so
00:02:19.423 SYMLINK libspdk_bdev_lvol.so
00:02:19.988 LIB libspdk_bdev_raid.a
00:02:19.988 SO libspdk_bdev_raid.so.6.0
00:02:19.988 SYMLINK libspdk_bdev_raid.so
00:02:20.922 LIB libspdk_bdev_nvme.a
00:02:20.922 SO libspdk_bdev_nvme.so.7.0
00:02:21.179 SYMLINK libspdk_bdev_nvme.so
00:02:21.436 CC module/event/subsystems/keyring/keyring.o
00:02:21.436 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:21.436 CC module/event/subsystems/vmd/vmd.o
00:02:21.436 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:21.436 CC module/event/subsystems/iobuf/iobuf.o
00:02:21.436 CC module/event/subsystems/sock/sock.o
00:02:21.436 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:21.436 CC module/event/subsystems/scheduler/scheduler.o
00:02:21.436 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:21.693 LIB libspdk_event_keyring.a
00:02:21.693 LIB libspdk_event_vhost_blk.a
00:02:21.693 LIB libspdk_event_vfu_tgt.a
00:02:21.693 LIB libspdk_event_scheduler.a
00:02:21.693 LIB libspdk_event_vmd.a
00:02:21.693 LIB libspdk_event_sock.a
00:02:21.693 SO libspdk_event_keyring.so.1.0
00:02:21.693 LIB libspdk_event_iobuf.a
00:02:21.693 SO libspdk_event_vhost_blk.so.3.0
00:02:21.693 SO libspdk_event_scheduler.so.4.0
00:02:21.693 SO libspdk_event_vfu_tgt.so.3.0
00:02:21.693 SO libspdk_event_sock.so.5.0
00:02:21.693 SO libspdk_event_vmd.so.6.0
00:02:21.693 SO libspdk_event_iobuf.so.3.0
00:02:21.693 SYMLINK libspdk_event_keyring.so
00:02:21.693 SYMLINK libspdk_event_vhost_blk.so
00:02:21.693 SYMLINK libspdk_event_vfu_tgt.so
00:02:21.693 SYMLINK libspdk_event_scheduler.so
00:02:21.693 SYMLINK libspdk_event_sock.so
00:02:21.693 SYMLINK libspdk_event_vmd.so
00:02:21.693 SYMLINK libspdk_event_iobuf.so
00:02:21.950 CC module/event/subsystems/accel/accel.o
00:02:21.950 LIB libspdk_event_accel.a
00:02:22.207 SO libspdk_event_accel.so.6.0
00:02:22.207 SYMLINK libspdk_event_accel.so
00:02:22.207 CC module/event/subsystems/bdev/bdev.o
00:02:22.464 LIB libspdk_event_bdev.a
00:02:22.464 SO libspdk_event_bdev.so.6.0
00:02:22.464 SYMLINK libspdk_event_bdev.so
00:02:22.720 CC module/event/subsystems/ublk/ublk.o
00:02:22.720 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:22.720 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:22.720 CC module/event/subsystems/scsi/scsi.o
00:02:22.720 CC module/event/subsystems/nbd/nbd.o
00:02:22.977 LIB libspdk_event_ublk.a
00:02:22.977 LIB libspdk_event_nbd.a
00:02:22.977 LIB libspdk_event_scsi.a
00:02:22.977 SO libspdk_event_nbd.so.6.0
00:02:22.977 SO libspdk_event_ublk.so.3.0
00:02:22.977 SO libspdk_event_scsi.so.6.0
00:02:22.977 SYMLINK libspdk_event_ublk.so
00:02:22.977 SYMLINK libspdk_event_nbd.so
00:02:22.977 LIB libspdk_event_nvmf.a
00:02:22.977 SYMLINK libspdk_event_scsi.so
00:02:22.977 SO libspdk_event_nvmf.so.6.0
00:02:22.977 SYMLINK libspdk_event_nvmf.so
00:02:23.233 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:23.233 CC module/event/subsystems/iscsi/iscsi.o
00:02:23.233 LIB libspdk_event_vhost_scsi.a
00:02:23.233 SO libspdk_event_vhost_scsi.so.3.0
00:02:23.233 LIB libspdk_event_iscsi.a
00:02:23.233 SO libspdk_event_iscsi.so.6.0
00:02:23.490 SYMLINK libspdk_event_vhost_scsi.so
00:02:23.490 SYMLINK libspdk_event_iscsi.so
00:02:23.490 SO libspdk.so.6.0
00:02:23.491 SYMLINK libspdk.so
00:02:23.758 CXX app/trace/trace.o
00:02:23.758 CC app/trace_record/trace_record.o
00:02:23.758 TEST_HEADER include/spdk/accel.h
00:02:23.758 CC app/spdk_lspci/spdk_lspci.o
00:02:23.758 TEST_HEADER include/spdk/accel_module.h
00:02:23.758 CC app/spdk_top/spdk_top.o
00:02:23.758 CC app/spdk_nvme_perf/perf.o
00:02:23.758 TEST_HEADER include/spdk/assert.h
00:02:23.758 CC app/spdk_nvme_discover/discovery_aer.o
00:02:23.758 TEST_HEADER include/spdk/barrier.h
00:02:23.758 TEST_HEADER include/spdk/base64.h
00:02:23.758 TEST_HEADER include/spdk/bdev.h
00:02:23.758 CC test/rpc_client/rpc_client_test.o
00:02:23.758 TEST_HEADER include/spdk/bdev_module.h
00:02:23.758 TEST_HEADER include/spdk/bdev_zone.h
00:02:23.758 TEST_HEADER include/spdk/bit_array.h
00:02:23.758 TEST_HEADER include/spdk/bit_pool.h
00:02:23.758 TEST_HEADER include/spdk/blob_bdev.h
00:02:23.758 CC app/spdk_nvme_identify/identify.o
00:02:23.758 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:23.758 TEST_HEADER include/spdk/blobfs.h
00:02:23.758 TEST_HEADER include/spdk/blob.h
00:02:23.758 TEST_HEADER include/spdk/conf.h
00:02:23.758 TEST_HEADER include/spdk/config.h
00:02:23.758 TEST_HEADER include/spdk/cpuset.h
00:02:23.758 TEST_HEADER include/spdk/crc16.h
00:02:23.758 TEST_HEADER include/spdk/crc32.h
00:02:23.758 TEST_HEADER include/spdk/crc64.h
00:02:23.758 TEST_HEADER include/spdk/dif.h
00:02:23.758 TEST_HEADER include/spdk/dma.h
00:02:23.758 TEST_HEADER include/spdk/endian.h
00:02:23.758 TEST_HEADER include/spdk/env_dpdk.h
00:02:23.758 TEST_HEADER include/spdk/env.h
00:02:23.758 TEST_HEADER include/spdk/event.h
00:02:23.758 TEST_HEADER include/spdk/fd_group.h
00:02:23.758 TEST_HEADER include/spdk/fd.h
00:02:23.758 TEST_HEADER include/spdk/file.h
00:02:23.758 TEST_HEADER include/spdk/ftl.h
00:02:23.758 TEST_HEADER include/spdk/gpt_spec.h
00:02:23.758 TEST_HEADER include/spdk/hexlify.h
00:02:23.758 TEST_HEADER include/spdk/histogram_data.h
00:02:23.758 TEST_HEADER include/spdk/idxd.h
00:02:23.758 TEST_HEADER include/spdk/idxd_spec.h
00:02:23.758 TEST_HEADER include/spdk/init.h
00:02:23.758 TEST_HEADER include/spdk/ioat_spec.h
00:02:23.758 TEST_HEADER include/spdk/ioat.h
00:02:23.758 TEST_HEADER include/spdk/iscsi_spec.h
00:02:23.758 TEST_HEADER include/spdk/json.h
00:02:23.758 TEST_HEADER include/spdk/jsonrpc.h
00:02:23.758 TEST_HEADER include/spdk/keyring.h
00:02:23.758 TEST_HEADER include/spdk/keyring_module.h
00:02:23.758 TEST_HEADER include/spdk/likely.h
00:02:23.758 TEST_HEADER include/spdk/lvol.h
00:02:23.758 TEST_HEADER include/spdk/log.h
00:02:23.758 TEST_HEADER include/spdk/memory.h
00:02:23.758 TEST_HEADER include/spdk/mmio.h
00:02:23.758 TEST_HEADER include/spdk/nbd.h
00:02:23.758 TEST_HEADER include/spdk/notify.h
00:02:23.758 TEST_HEADER include/spdk/net.h
00:02:23.758 TEST_HEADER include/spdk/nvme_intel.h
00:02:23.758 TEST_HEADER include/spdk/nvme.h
00:02:23.758 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:23.758 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:23.758 TEST_HEADER include/spdk/nvme_spec.h
00:02:23.758 TEST_HEADER include/spdk/nvme_zns.h
00:02:23.758 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:23.758 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:23.758 TEST_HEADER include/spdk/nvmf.h
00:02:23.758 TEST_HEADER include/spdk/nvmf_spec.h
00:02:23.758 TEST_HEADER include/spdk/nvmf_transport.h
00:02:23.758 TEST_HEADER include/spdk/opal_spec.h
00:02:23.758 TEST_HEADER include/spdk/opal.h
00:02:23.758 TEST_HEADER include/spdk/pci_ids.h
00:02:23.758 TEST_HEADER include/spdk/queue.h
00:02:23.758 TEST_HEADER include/spdk/pipe.h
00:02:23.758 TEST_HEADER include/spdk/reduce.h
00:02:23.758 TEST_HEADER include/spdk/rpc.h
00:02:23.758 TEST_HEADER include/spdk/scheduler.h
00:02:23.758 TEST_HEADER include/spdk/scsi.h
00:02:23.758 TEST_HEADER include/spdk/scsi_spec.h
00:02:23.758 TEST_HEADER include/spdk/sock.h
00:02:23.758 TEST_HEADER include/spdk/stdinc.h
00:02:23.758 TEST_HEADER include/spdk/string.h
00:02:23.758 TEST_HEADER include/spdk/thread.h
00:02:23.758 TEST_HEADER include/spdk/trace.h
00:02:23.758 TEST_HEADER include/spdk/trace_parser.h
00:02:23.758 TEST_HEADER include/spdk/tree.h
00:02:23.758 TEST_HEADER include/spdk/ublk.h
00:02:23.758 TEST_HEADER include/spdk/util.h
00:02:23.758 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:23.758 TEST_HEADER include/spdk/uuid.h
00:02:23.758 TEST_HEADER include/spdk/version.h
00:02:23.758 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:23.758 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:23.758 TEST_HEADER include/spdk/vhost.h
00:02:23.758 TEST_HEADER include/spdk/vmd.h
00:02:23.758 TEST_HEADER include/spdk/xor.h
00:02:23.758 TEST_HEADER include/spdk/zipf.h
00:02:23.758 CXX test/cpp_headers/accel.o
00:02:23.758 CXX test/cpp_headers/accel_module.o
00:02:23.758 CXX test/cpp_headers/assert.o
00:02:23.758 CXX test/cpp_headers/barrier.o
00:02:23.758 CXX test/cpp_headers/base64.o
00:02:23.758 CXX test/cpp_headers/bdev.o
00:02:23.758 CXX test/cpp_headers/bdev_module.o
00:02:23.758 CXX test/cpp_headers/bdev_zone.o
00:02:23.758 CXX test/cpp_headers/bit_pool.o
00:02:23.758 CXX test/cpp_headers/bit_array.o
00:02:23.758 CC app/nvmf_tgt/nvmf_main.o
00:02:23.758 CXX test/cpp_headers/blob_bdev.o
00:02:23.758 CXX test/cpp_headers/blobfs_bdev.o
00:02:23.758 CXX test/cpp_headers/blobfs.o
00:02:23.758 CXX test/cpp_headers/blob.o
00:02:23.758 CXX test/cpp_headers/conf.o
00:02:23.758 CXX test/cpp_headers/config.o
00:02:23.758 CXX test/cpp_headers/cpuset.o
00:02:23.758 CXX test/cpp_headers/crc16.o
00:02:23.758 CC app/spdk_dd/spdk_dd.o
00:02:23.758 CC app/iscsi_tgt/iscsi_tgt.o
00:02:23.758 CXX test/cpp_headers/crc32.o
00:02:23.758 CC examples/util/zipf/zipf.o
00:02:23.758 CC examples/ioat/verify/verify.o
00:02:23.758 CC test/app/histogram_perf/histogram_perf.o
00:02:23.758 CC test/env/memory/memory_ut.o
00:02:23.758 CC test/env/vtophys/vtophys.o
00:02:23.758 CC examples/ioat/perf/perf.o
00:02:23.758 CC test/thread/poller_perf/poller_perf.o
00:02:23.758 CC test/env/pci/pci_ut.o
00:02:23.758 CC test/app/jsoncat/jsoncat.o
00:02:23.758 CC test/app/stub/stub.o
00:02:23.758 CC app/spdk_tgt/spdk_tgt.o
00:02:23.758 CC app/fio/nvme/fio_plugin.o
00:02:23.758 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:24.015 CC app/fio/bdev/fio_plugin.o
00:02:24.015 CC test/dma/test_dma/test_dma.o
00:02:24.015 CC test/app/bdev_svc/bdev_svc.o
00:02:24.015 CC test/env/mem_callbacks/mem_callbacks.o
00:02:24.015 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:24.015 LINK spdk_lspci
00:02:24.015 LINK rpc_client_test
00:02:24.015 LINK spdk_nvme_discover
00:02:24.278 LINK histogram_perf
00:02:24.278 LINK vtophys
00:02:24.278 LINK interrupt_tgt
00:02:24.278 CXX test/cpp_headers/crc64.o
00:02:24.278 LINK zipf
00:02:24.278 LINK nvmf_tgt
00:02:24.278 CXX test/cpp_headers/dif.o
00:02:24.278 LINK poller_perf
00:02:24.278 CXX test/cpp_headers/dma.o
00:02:24.278 CXX test/cpp_headers/endian.o
00:02:24.278 CXX test/cpp_headers/env_dpdk.o
00:02:24.278 CXX test/cpp_headers/env.o
00:02:24.278 CXX test/cpp_headers/event.o
00:02:24.278 CXX test/cpp_headers/fd_group.o
00:02:24.278 LINK jsoncat
00:02:24.278 CXX test/cpp_headers/fd.o
00:02:24.278 CXX test/cpp_headers/file.o
00:02:24.278 CXX test/cpp_headers/ftl.o
00:02:24.278 LINK env_dpdk_post_init
00:02:24.278 CXX test/cpp_headers/gpt_spec.o
00:02:24.278 LINK stub
00:02:24.278 LINK iscsi_tgt
00:02:24.278 LINK spdk_trace_record
00:02:24.278 CXX test/cpp_headers/hexlify.o
00:02:24.278 CXX test/cpp_headers/histogram_data.o
00:02:24.278 CXX test/cpp_headers/idxd.o
00:02:24.278 LINK verify
00:02:24.278 LINK bdev_svc
00:02:24.278 CXX test/cpp_headers/idxd_spec.o
00:02:24.278 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:24.278 LINK ioat_perf
00:02:24.278 CXX test/cpp_headers/init.o
00:02:24.278 LINK spdk_tgt
00:02:24.546 CXX test/cpp_headers/ioat.o
00:02:24.546 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:24.546 CXX test/cpp_headers/ioat_spec.o
00:02:24.546 CXX test/cpp_headers/iscsi_spec.o
00:02:24.546 CXX test/cpp_headers/json.o
00:02:24.546 CXX test/cpp_headers/jsonrpc.o
00:02:24.546 CXX test/cpp_headers/keyring.o
00:02:24.546 LINK spdk_dd
00:02:24.546 CXX test/cpp_headers/keyring_module.o
00:02:24.546 CXX test/cpp_headers/likely.o
00:02:24.546 CXX test/cpp_headers/log.o
00:02:24.546 CXX test/cpp_headers/lvol.o
00:02:24.546 CXX test/cpp_headers/memory.o
00:02:24.546 LINK spdk_trace
00:02:24.546 CXX test/cpp_headers/mmio.o
00:02:24.546 CXX test/cpp_headers/nbd.o
00:02:24.546 LINK pci_ut
00:02:24.546 CXX test/cpp_headers/net.o
00:02:24.546 CXX test/cpp_headers/notify.o
00:02:24.546 CXX test/cpp_headers/nvme.o
00:02:24.546 CXX test/cpp_headers/nvme_intel.o
00:02:24.546 CXX test/cpp_headers/nvme_ocssd.o
00:02:24.546 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:24.546 CXX test/cpp_headers/nvme_spec.o
00:02:24.546 CXX test/cpp_headers/nvme_zns.o
00:02:24.546 CXX test/cpp_headers/nvmf_cmd.o
00:02:24.806 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:24.806 CXX test/cpp_headers/nvmf.o
00:02:24.806 CXX test/cpp_headers/nvmf_spec.o
00:02:24.806 CXX test/cpp_headers/nvmf_transport.o
00:02:24.806 CXX test/cpp_headers/opal.o
00:02:24.806 LINK test_dma
00:02:24.806 CXX test/cpp_headers/opal_spec.o
00:02:24.806 LINK nvme_fuzz
00:02:24.806 CXX test/cpp_headers/pci_ids.o
00:02:24.806 CC test/event/event_perf/event_perf.o
00:02:24.806 CC examples/sock/hello_world/hello_sock.o
00:02:24.806 CXX test/cpp_headers/pipe.o
00:02:24.806 CC examples/vmd/lsvmd/lsvmd.o
00:02:24.806 CXX test/cpp_headers/queue.o
00:02:24.806 CC test/event/reactor/reactor.o
00:02:24.806 CC examples/idxd/perf/perf.o
00:02:25.066 CXX test/cpp_headers/reduce.o
00:02:25.066 LINK spdk_bdev
00:02:25.066 CXX test/cpp_headers/rpc.o
00:02:25.066 CC test/event/reactor_perf/reactor_perf.o
00:02:25.066 CC examples/thread/thread/thread_ex.o
00:02:25.066 CXX test/cpp_headers/scheduler.o
00:02:25.066 CXX test/cpp_headers/scsi.o
00:02:25.066 CXX test/cpp_headers/scsi_spec.o
00:02:25.066 CXX test/cpp_headers/sock.o
00:02:25.066 CC test/event/app_repeat/app_repeat.o
00:02:25.066 CXX test/cpp_headers/stdinc.o
00:02:25.066 CXX test/cpp_headers/string.o
00:02:25.066 CXX test/cpp_headers/thread.o
00:02:25.066 CC examples/vmd/led/led.o
00:02:25.066 CXX test/cpp_headers/trace.o
00:02:25.066 CXX test/cpp_headers/trace_parser.o
00:02:25.066 LINK spdk_nvme
00:02:25.066 CC test/event/scheduler/scheduler.o
00:02:25.066 CXX test/cpp_headers/tree.o
00:02:25.066 CXX test/cpp_headers/ublk.o
00:02:25.066 CXX test/cpp_headers/util.o
00:02:25.066 CXX test/cpp_headers/uuid.o
00:02:25.066 CXX test/cpp_headers/version.o
00:02:25.066 CXX test/cpp_headers/vfio_user_pci.o
00:02:25.066 CXX test/cpp_headers/vfio_user_spec.o
00:02:25.066 CXX test/cpp_headers/vhost.o
00:02:25.066 CXX test/cpp_headers/vmd.o
00:02:25.066 CXX test/cpp_headers/xor.o
00:02:25.066 CXX test/cpp_headers/zipf.o
00:02:25.066 LINK mem_callbacks
00:02:25.066 LINK spdk_nvme_perf
00:02:25.326 LINK lsvmd
00:02:25.326 LINK event_perf
00:02:25.326 CC app/vhost/vhost.o
00:02:25.326 LINK reactor
00:02:25.326 LINK reactor_perf
00:02:25.326 LINK vhost_fuzz
00:02:25.326 LINK app_repeat
00:02:25.326 LINK led
00:02:25.326 LINK spdk_nvme_identify
00:02:25.326 LINK hello_sock
00:02:25.326 LINK spdk_top
00:02:25.326 CC test/nvme/overhead/overhead.o
00:02:25.326 CC test/nvme/reset/reset.o
00:02:25.326 CC test/nvme/err_injection/err_injection.o
00:02:25.326 CC test/nvme/aer/aer.o
00:02:25.326 CC test/nvme/reserve/reserve.o
00:02:25.326 CC test/nvme/sgl/sgl.o
00:02:25.586 CC test/nvme/e2edp/nvme_dp.o
00:02:25.586 CC test/nvme/startup/startup.o
00:02:25.586 CC test/nvme/simple_copy/simple_copy.o
00:02:25.586 LINK thread
00:02:25.586 CC test/accel/dif/dif.o
00:02:25.586 CC test/blobfs/mkfs/mkfs.o
00:02:25.586 CC test/nvme/connect_stress/connect_stress.o
00:02:25.586 CC test/nvme/boot_partition/boot_partition.o
00:02:25.586 LINK scheduler
00:02:25.586 CC test/nvme/compliance/nvme_compliance.o
00:02:25.586 CC test/lvol/esnap/esnap.o
00:02:25.586 CC test/nvme/fused_ordering/fused_ordering.o
00:02:25.586 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:25.586 CC test/nvme/cuse/cuse.o
00:02:25.586 CC test/nvme/fdp/fdp.o
00:02:25.586 LINK idxd_perf
00:02:25.586 LINK vhost
00:02:25.586 LINK err_injection
00:02:25.844 LINK connect_stress
00:02:25.844 LINK startup
00:02:25.844 LINK reserve
00:02:25.844 LINK reset
00:02:25.844 LINK doorbell_aers
00:02:25.844 LINK fused_ordering
00:02:25.844 CC examples/nvme/abort/abort.o
00:02:25.844 LINK overhead
00:02:25.844 CC examples/nvme/reconnect/reconnect.o
00:02:25.844 CC examples/nvme/hello_world/hello_world.o
00:02:25.844 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:25.844 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:25.844 LINK boot_partition
00:02:25.844 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:25.844 CC examples/nvme/arbitration/arbitration.o
00:02:25.844 CC examples/nvme/hotplug/hotplug.o
00:02:25.844 LINK simple_copy
00:02:25.844 LINK mkfs
00:02:25.844 LINK sgl
00:02:25.844 LINK nvme_compliance
00:02:25.844 LINK aer
00:02:25.844 LINK nvme_dp
00:02:26.102 LINK memory_ut
00:02:26.102 LINK fdp
00:02:26.102 LINK dif
00:02:26.102 CC examples/accel/perf/accel_perf.o
00:02:26.102 LINK cmb_copy
00:02:26.102 CC examples/blob/cli/blobcli.o
00:02:26.102 CC examples/blob/hello_world/hello_blob.o
00:02:26.102 LINK hello_world
00:02:26.102 LINK pmr_persistence
00:02:26.102 LINK hotplug
00:02:26.359 LINK reconnect
00:02:26.359 LINK arbitration
00:02:26.359 LINK abort
00:02:26.359 LINK hello_blob
00:02:26.359 LINK nvme_manage
00:02:26.616 CC test/bdev/bdevio/bdevio.o
00:02:26.616 LINK accel_perf
00:02:26.616 LINK blobcli
00:02:26.616 LINK iscsi_fuzz
00:02:26.898 LINK bdevio
00:02:26.898 CC examples/bdev/hello_world/hello_bdev.o
00:02:26.898 CC examples/bdev/bdevperf/bdevperf.o
00:02:27.155 LINK cuse
00:02:27.155 LINK hello_bdev
00:02:27.719 LINK bdevperf
00:02:27.976 CC examples/nvmf/nvmf/nvmf.o
00:02:28.542 LINK nvmf
00:02:30.441 LINK esnap
00:02:31.006
00:02:31.006 real 0m41.704s
00:02:31.006 user 7m25.696s
00:02:31.006 sys 1m48.759s
00:02:31.006 11:57:38 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:31.006 11:57:38 make -- common/autotest_common.sh@10 -- $ set +x
00:02:31.006 ************************************
00:02:31.006 END TEST make
00:02:31.006 ************************************
00:02:31.006 11:57:38 -- common/autotest_common.sh@1142 -- $ return 0
00:02:31.006 11:57:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:31.006 11:57:38 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:31.006 11:57:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:31.006 11:57:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:31.006 11:57:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:31.006 11:57:38 -- pm/common@44 -- $ pid=753593
00:02:31.006 11:57:38 -- pm/common@50 -- $ kill -TERM 753593
00:02:31.006 11:57:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:31.006 11:57:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:31.006 11:57:38 -- pm/common@44 -- $ pid=753595
00:02:31.006 11:57:38 -- pm/common@50 -- $ kill -TERM 753595
00:02:31.006 11:57:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:31.006 11:57:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:31.006 11:57:38 -- pm/common@44 -- $ pid=753597
00:02:31.006 11:57:38 -- pm/common@50 -- $ kill -TERM 753597
00:02:31.006 11:57:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:31.006 11:57:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:31.006 11:57:38 -- pm/common@44 -- $ pid=753625
00:02:31.006 11:57:38 -- pm/common@50 -- $ sudo -E kill -TERM 753625
00:02:31.006 11:57:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:31.006 11:57:38 -- nvmf/common.sh@7 -- # uname -s
00:02:31.006 11:57:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:31.006 11:57:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:31.006 11:57:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:31.006 11:57:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:31.006 11:57:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:31.006 11:57:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:31.006 11:57:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:31.006 11:57:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:31.006 11:57:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:31.006 11:57:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:31.006 11:57:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:02:31.006 11:57:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:02:31.006 11:57:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:31.006 11:57:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:31.006 11:57:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:31.006 11:57:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:31.006 11:57:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:31.006 11:57:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:31.006 11:57:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:31.006 11:57:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:31.006 11:57:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:31.006 11:57:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:31.006 11:57:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:31.006 11:57:38 -- paths/export.sh@5 -- # export PATH
00:02:31.006 11:57:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:31.006 11:57:38 -- nvmf/common.sh@47 -- # : 0
00:02:31.006 11:57:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:31.006 11:57:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:31.006 11:57:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:31.006 11:57:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:31.006 11:57:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:31.006 11:57:38 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:31.006 11:57:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:31.006 11:57:38 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:31.006 11:57:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:31.006 11:57:38 -- spdk/autotest.sh@32 -- # uname -s
00:02:31.006 11:57:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:31.006 11:57:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:31.006 11:57:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.006 11:57:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:31.006 11:57:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.006 11:57:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:31.006 11:57:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:31.006 11:57:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:31.006 11:57:38 -- spdk/autotest.sh@48 -- # udevadm_pid=825544 00:02:31.006 11:57:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:31.006 11:57:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:31.006 11:57:38 -- pm/common@17 -- # local monitor 00:02:31.006 11:57:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.006 11:57:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.006 11:57:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.006 11:57:38 -- pm/common@21 -- # date +%s 00:02:31.006 11:57:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.006 11:57:38 -- pm/common@21 -- # date +%s 00:02:31.006 11:57:38 -- pm/common@25 -- # sleep 1 00:02:31.006 11:57:38 -- pm/common@21 -- # date +%s 00:02:31.006 11:57:38 -- pm/common@21 -- # date +%s 00:02:31.006 11:57:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721642258 00:02:31.006 11:57:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721642258 00:02:31.006 11:57:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721642258 00:02:31.006 11:57:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721642258 00:02:31.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721642258_collect-vmstat.pm.log 00:02:31.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721642258_collect-cpu-load.pm.log 00:02:31.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721642258_collect-cpu-temp.pm.log 00:02:31.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721642258_collect-bmc-pm.bmc.pm.log 00:02:31.934 11:57:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:31.934 11:57:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:31.934 11:57:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:31.934 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:02:31.934 11:57:39 -- 
spdk/autotest.sh@59 -- # create_test_list 00:02:31.934 11:57:39 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:31.934 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:02:31.934 11:57:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:31.934 11:57:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.934 11:57:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.934 11:57:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:31.934 11:57:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.934 11:57:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:31.934 11:57:39 -- common/autotest_common.sh@1455 -- # uname 00:02:31.934 11:57:39 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:31.934 11:57:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:31.934 11:57:39 -- common/autotest_common.sh@1475 -- # uname 00:02:31.934 11:57:39 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:31.934 11:57:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:31.934 11:57:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:31.934 11:57:39 -- spdk/autotest.sh@72 -- # hash lcov 00:02:31.934 11:57:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:31.934 11:57:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:31.934 --rc lcov_branch_coverage=1 00:02:31.934 --rc lcov_function_coverage=1 00:02:31.934 --rc genhtml_branch_coverage=1 00:02:31.934 --rc genhtml_function_coverage=1 00:02:31.934 --rc genhtml_legend=1 00:02:31.934 --rc geninfo_all_blocks=1 00:02:31.934 ' 00:02:31.934 11:57:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:31.934 --rc lcov_branch_coverage=1 00:02:31.934 --rc lcov_function_coverage=1 00:02:31.934 --rc genhtml_branch_coverage=1 00:02:31.934 --rc genhtml_function_coverage=1 00:02:31.934 --rc genhtml_legend=1 00:02:31.934 --rc geninfo_all_blocks=1 00:02:31.934 ' 00:02:31.934 11:57:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:31.934 --rc lcov_branch_coverage=1 00:02:31.934 --rc lcov_function_coverage=1 00:02:31.934 --rc genhtml_branch_coverage=1 00:02:31.934 --rc genhtml_function_coverage=1 00:02:31.934 --rc genhtml_legend=1 00:02:31.934 --rc geninfo_all_blocks=1 00:02:31.934 --no-external' 00:02:31.934 11:57:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:31.934 --rc lcov_branch_coverage=1 00:02:31.934 --rc lcov_function_coverage=1 00:02:31.934 --rc genhtml_branch_coverage=1 00:02:31.934 --rc genhtml_function_coverage=1 00:02:31.934 --rc genhtml_legend=1 00:02:31.934 --rc geninfo_all_blocks=1 00:02:31.934 --no-external' 00:02:31.934 11:57:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:32.191 lcov: LCOV version 1.14 00:02:32.191 11:57:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:50.289 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:50.289 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:02.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:02.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
[... the same "no functions found" / geninfo WARNING pair repeats for every remaining header object under test/cpp_headers, accel_module.gcno through uuid.gcno ...]
00:03:02.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:02.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:02.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:02.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:02.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:02.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:02.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:02.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:02.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:02.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:02.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:02.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:02.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:02.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:05.758 11:58:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:05.758 11:58:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:05.758 11:58:13 -- common/autotest_common.sh@10 -- # set +x 00:03:05.758 11:58:13 -- spdk/autotest.sh@91 -- # rm -f 00:03:05.758 11:58:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.691 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:06.949 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:06.949 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:06.949 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:06.949 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:06.949 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:06.949 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:06.949 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:06.949 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:06.949 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:06.949 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:06.949 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:06.949 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:06.949 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:06.949 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:06.949 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:06.949 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:07.206 11:58:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:07.206 11:58:14 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
00:03:07.206 11:58:14 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:07.206 11:58:14 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:07.206 11:58:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:07.206 11:58:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:07.206 11:58:14 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:07.206 11:58:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.206 11:58:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:07.206 11:58:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:07.206 11:58:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.206 11:58:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.206 11:58:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:07.206 11:58:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:07.206 11:58:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.206 No valid GPT data, bailing 00:03:07.206 11:58:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.206 11:58:14 -- scripts/common.sh@391 -- # pt= 00:03:07.206 11:58:14 -- scripts/common.sh@392 -- # return 1 00:03:07.206 11:58:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.206 1+0 records in 00:03:07.206 1+0 records out 00:03:07.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0028704 s, 365 MB/s 00:03:07.206 11:58:14 -- spdk/autotest.sh@118 -- # sync 00:03:07.206 11:58:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.206 11:58:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:07.206 11:58:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:09.105 11:58:16 -- spdk/autotest.sh@124 -- # uname -s 00:03:09.105 11:58:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:09.105 11:58:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.105 11:58:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.105 11:58:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.105 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:03:09.105 ************************************ 00:03:09.105 START TEST setup.sh 00:03:09.105 ************************************ 00:03:09.105 11:58:16 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.105 * Looking for test storage... 
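The pre-cleanup pass above does three things per namespace: skip it if it is zoned, probe for a partition table (spdk-gpt.py first, then blkid), and zero the first MiB when nothing is found. A hedged shell recreation of that per-device logic; $dev is illustrative, and the sysfs/blkid probes are the same ones visible in the trace:

    dev=/dev/nvme0n1                      # assumption: the namespace under test
    name=${dev##*/}
    if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
        echo "$dev is zoned, leaving it alone"   # the harness collects these in zoned_devs
    elif [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        # "No valid GPT data, bailing" above means exactly this branch: no
        # partition table, so stamp the first MiB before the tests claim it.
        dd if=/dev/zero of="$dev" bs=1M count=1 && sync
    fi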
00:03:09.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.105 11:58:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:09.105 11:58:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:09.105 11:58:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:09.105 11:58:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.105 11:58:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.105 11:58:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.105 ************************************ 00:03:09.105 START TEST acl 00:03:09.105 ************************************ 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:09.105 * Looking for test storage... 00:03:09.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.105 11:58:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.105 11:58:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:09.105 11:58:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:09.105 11:58:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:09.105 11:58:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:09.105 11:58:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:09.105 11:58:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:09.105 11:58:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.105 11:58:16 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.478 11:58:18 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:10.478 11:58:18 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:10.478 11:58:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.478 11:58:18 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:10.478 11:58:18 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.478 11:58:18 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:11.891 Hugepages 00:03:11.891 node hugesize free / total 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.891 00:03:11.891 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:11.891 11:58:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the identical match/skip cycle repeats for the other fifteen ioatdma channels, 0000:00:04.1 through 0000:80:04.7 ...]
00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:11.892 11:58:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:11.892 11:58:19 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.892 11:58:19 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.892 11:58:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.892 ************************************ 00:03:11.892 START TEST denied 00:03:11.892 ************************************ 00:03:11.892 11:58:19 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:11.892 11:58:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 11:58:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep
'Skipping denied controller at 0000:88:00.0' 00:03:11.892 11:58:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:11.892 11:58:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.892 11:58:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.788 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.788 11:58:21 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.688 00:03:15.688 real 0m3.852s 00:03:15.688 user 0m1.093s 00:03:15.688 sys 0m1.831s 00:03:15.688 11:58:23 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.688 11:58:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:15.688 ************************************ 00:03:15.688 END TEST denied 00:03:15.688 ************************************ 00:03:15.688 11:58:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:15.688 11:58:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:15.688 11:58:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.688 11:58:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.688 11:58:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:15.688 ************************************ 00:03:15.688 START TEST allowed 00:03:15.688 ************************************ 00:03:15.688 11:58:23 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:15.688 11:58:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:15.688 11:58:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:15.688 11:58:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:15.688 11:58:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.688 11:58:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.221 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.221 11:58:25 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:18.221 11:58:25 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:18.221 11:58:25 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:18.221 11:58:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.221 11:58:25 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.595 00:03:19.595 real 
0m3.889s 00:03:19.595 user 0m0.983s 00:03:19.595 sys 0m1.730s 11:58:27 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.595 11:58:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:19.595 ************************************ 00:03:19.595 END TEST allowed 00:03:19.595 ************************************ 00:03:19.595 11:58:27 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:19.595 00:03:19.595 real 0m10.661s 00:03:19.595 user 0m3.227s 00:03:19.595 sys 0m5.386s 00:03:19.595 11:58:27 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.595 11:58:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:19.595 ************************************ 00:03:19.595 END TEST acl 00:03:19.595 ************************************ 00:03:19.595 11:58:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0
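The acl suite that just finished exercised scripts/setup.sh through its two gatekeeping variables: PCI_BLOCKED must leave the listed controller on its kernel driver ("Skipping denied controller"), while PCI_ALLOWED rebinds only that controller for userspace I/O ("nvme -> vfio-pci"). A hedged recreation, reusing this run's BDF (substitute your own; needs root):

    cd /path/to/spdk                      # assumption: an SPDK checkout
    # denied: the controller must be skipped and stay on the nvme driver
    PCI_BLOCKED='0000:88:00.0' ./scripts/setup.sh config
    readlink -f /sys/bus/pci/devices/0000:88:00.0/driver   # -> .../drivers/nvme
    ./scripts/setup.sh reset
    # allowed: only the listed controller is handed to vfio-pci
    PCI_ALLOWED='0000:88:00.0' ./scripts/setup.sh config
    ./scripts/setup.sh reset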
'Active(anon): 8334748 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496816 kB' 'Mapped: 181492 kB' 'Shmem: 7841784 kB' 'KReclaimable: 199864 kB' 'Slab: 571592 kB' 'SReclaimable: 199864 kB' 'SUnreclaim: 371728 kB' 'KernelStack: 12944 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9472796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.854 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- 
# [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.855 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 
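The wall of continue lines above is setup/common.sh's get_meminfo loop locating Hugepagesize: each /proc/meminfo line is split on ': ' into a key and a value, every non-matching key falls through to continue, and the matching key echoes its value (2048 kB) and returns; hugepages.sh then records default_hugepages=2048, and get_nodes counts two NUMA nodes from the /sys/devices/system/node/node+([0-9]) extglob. A minimal standalone sketch of the same field-extraction pattern (the helper name meminfo_get is illustrative, not the script's own):

  # Sketch: pull one field out of /proc/meminfo the way the trace does.
  meminfo_get() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  default_hugepages=$(meminfo_get Hugepagesize)   # -> 2048 (kB)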
00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:19.856 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:19.856 11:58:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.856 11:58:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.856 11:58:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.856 ************************************ 00:03:19.856 START TEST default_setup 00:03:19.856 ************************************ 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.856 
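clear_hp, traced just above, resets every hugepage pool on both nodes to zero before the test starts and exports CLEAR_HUGE=yes, so the later counts begin from a known-empty state. The trace shows only 'echo 0'; the sysfs redirect target is implied by the hugepages layout, not shown. A sketch of that reset loop under that assumption (a simple glob stands in for the script's extglob; writing these files needs root):

  # Sketch: zero all per-node hugepage pools, as clear_hp does.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes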
11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.856 11:58:27 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.230 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:21.230 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:21.230 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:21.230 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:21.230 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:21.230 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:21.230 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:21.230 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:21.230 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:22.172 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.172 11:58:29 
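get_test_nr_hugepages converts the requested size into a page count: 2097152 kB (2 GiB) divided by the 2048 kB default page size gives the nr_hugepages=1024 seen at hugepages.sh@57, and with node_ids=('0') all 1024 pages land in nodes_test[0]. The setup.sh run that follows rebinds the ioatdma channels and the NVMe controller at 0000:88:00.0 from their kernel drivers to vfio-pci so the test can drive them from userspace. A sketch of the size-to-pages arithmetic (variable names illustrative):

  # Sketch: requested size in kB -> hugepage count, per the trace.
  size_kb=2097152                              # 2 GiB requested
  hugepage_kb=2048                             # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))    # = 1024
  nodes_test[0]=$nr_hugepages                  # node_ids=('0'): all on node 0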
setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44332036 kB' 'MemAvailable: 47839124 kB' 'Buffers: 2704 kB' 'Cached: 11739960 kB' 'SwapCached: 0 kB' 'Active: 8748428 kB' 'Inactive: 3506384 kB' 'Active(anon): 8354020 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515012 kB' 'Mapped: 181540 kB' 'Shmem: 7841872 kB' 'KReclaimable: 199840 kB' 'Slab: 570904 kB' 'SReclaimable: 199840 kB' 'SUnreclaim: 371064 kB' 'KernelStack: 12736 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9493436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 
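The printf dump above is the mem array being replayed: get_meminfo reads the whole meminfo file into an array with mapfile and then scans it with the same IFS=': ' read loop, now hunting AnonHugePages. The mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 is what makes the per-node case uniform: /sys/devices/system/node/nodeN/meminfo prefixes every line with "Node N ", and the extglob strip removes that prefix so both files parse identically. A sketch of that normalization (extglob must be enabled; read_meminfo is an illustrative name):

  # Sketch: load system-wide or per-node meminfo into one uniform array.
  shopt -s extglob
  read_meminfo() {
      local node=$1 mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes, if any
      printf '%s\n' "${mem[@]}"
  }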
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.172 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.173 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44334100 kB' 'MemAvailable: 47841188 kB' 'Buffers: 2704 kB' 'Cached: 11739964 kB' 'SwapCached: 0 kB' 'Active: 8748212 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514780 kB' 'Mapped: 181540 kB' 'Shmem: 7841876 kB' 'KReclaimable: 199840 kB' 'Slab: 570872 kB' 'SReclaimable: 199840 kB' 'SUnreclaim: 371032 kB' 'KernelStack: 12768 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9493456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.174 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 
11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- 
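At this point the verify pass has pulled its first two counters: the AnonHugePages scan returned 0 (anon=0) and the HugePages_Surp scan just echoed 0 (the surp=0 assignment follows immediately below); the HugePages_Rsvd query that starts next will read 0 from the same dump, which already reports HugePages_Total: 1024 and HugePages_Free: 1024. In other words the default_setup request is satisfied: all 1024 pages exist, none reserved, none surplus. A plausible sketch of that end check, reusing the meminfo_get helper sketched earlier (the comparison itself is illustrative; the script's own verification logic continues past this excerpt):

  # Sketch: the counters verify_nr_hugepages is collecting here.
  total=$(meminfo_get HugePages_Total)   # 1024
  free=$(meminfo_get HugePages_Free)     # 1024
  rsvd=$(meminfo_get HugePages_Rsvd)     # 0
  surp=$(meminfo_get HugePages_Surp)     # 0
  (( total == 1024 && free == 1024 && rsvd == 0 && surp == 0 )) \
      && echo "default_setup: 1024 x 2048 kB hugepages verified"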
setup/hugepages.sh@99 -- # surp=0 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44343372 kB' 'MemAvailable: 47850460 kB' 'Buffers: 2704 kB' 'Cached: 11739980 kB' 'SwapCached: 0 kB' 'Active: 8747728 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353320 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514656 kB' 'Mapped: 181540 kB' 'Shmem: 7841892 kB' 'KReclaimable: 199840 kB' 'Slab: 570996 kB' 'SReclaimable: 199840 kB' 'SUnreclaim: 371156 kB' 'KernelStack: 12784 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9493476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.177 nr_hugepages=1024 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.177 resv_hugepages=0 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.177 surplus_hugepages=0 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.177 anon_hugepages=0 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.177 
11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44343720 kB' 'MemAvailable: 47850808 kB' 'Buffers: 2704 kB' 'Cached: 11740004 kB' 'SwapCached: 0 kB' 'Active: 8747716 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353308 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514620 kB' 'Mapped: 181540 kB' 'Shmem: 7841916 kB' 'KReclaimable: 199840 kB' 'Slab: 570996 kB' 'SReclaimable: 199840 kB' 'SUnreclaim: 371156 kB' 'KernelStack: 12768 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9498832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 
11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
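
What this stretch of the trace is doing: setup/common.sh's get_meminfo walks the whole of /proc/meminfo one "Key: value" pair at a time, hitting continue for every key that is not the one requested (here HugePages_Total), and echoes the value once it matches. Below is a minimal sketch of that helper, reconstructed from the xtrace lines in this log. The names (get_meminfo, mem_f, the IFS=': ' read loop, the "Node N " prefix strip) all appear in the trace itself; the exact control flow is an assumption, not the verbatim SPDK source.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-}   # key to look up, plus an optional NUMA node
    local var val _ mem
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo file instead
    # (the trace shows this branch at setup/common.sh@23-24).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" pairs until the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 1024 for HugePages_Total on this host
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # system-wide lookup
get_meminfo HugePages_Surp 0    # node 0 only

Every non-matching key produces one [[ ... ]] test plus one continue in the xtrace output, which is why a single lookup expands into the long runs of lines seen here.
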
00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.178 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26095464 kB' 'MemUsed: 6734420 kB' 'SwapCached: 0 kB' 'Active: 3494728 kB' 'Inactive: 153096 kB' 'Active(anon): 3334076 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 153096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3305992 kB' 'Mapped: 91684 kB' 'AnonPages: 345264 kB' 'Shmem: 2992244 kB' 'KernelStack: 7928 kB' 'PageTables: 4864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98052 kB' 'Slab: 324688 kB' 'SReclaimable: 98052 kB' 'SUnreclaim: 226636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
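
At this point the same lookup runs per NUMA node: mem_f has switched to /sys/devices/system/node/node0/meminfo, the node-0 figures (MemUsed, HugePages_Total: 1024, and so on) are dumped, and HugePages_Surp is extracted from them. Around it, hugepages.sh reconciles each node's actual hugepage count with what the test expected. A condensed sketch of that bookkeeping follows, stitched together from the hugepages.sh@29-33 and @115-128 lines in this trace; it reuses the get_meminfo sketch above, and the glue between the pieces (including the nodes_test seeding) is inferred, not the verbatim script.

#!/usr/bin/env bash
shopt -s extglob
# Assumes the get_meminfo helper sketched earlier is already defined.

declare -a nodes_test nodes_sys sorted_t sorted_s
resv=0               # HugePages_Rsvd, as reported earlier in the trace
nodes_test[0]=1024   # this default_setup run expects all 1024 pages on node 0

# get_nodes: snapshot every node's current hugepage count
# (this host reports nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2).
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}

# Fold reserved and per-node surplus pages into the expected totals ...
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")
    (( nodes_test[node] += surp ))
done

# ... then record and report each node, which is what produces the
# "node0=1024 expecting 1024" line just below in the log.
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
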
00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.179 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.180 node0=1024 expecting 1024 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.180 00:03:22.180 real 0m2.413s 00:03:22.180 user 0m0.648s 00:03:22.180 sys 0m0.886s 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.180 11:58:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:22.180 ************************************ 00:03:22.180 END TEST default_setup 00:03:22.180 ************************************ 00:03:22.438 11:58:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:22.438 11:58:30 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:22.438 11:58:30 setup.sh.hugepages -- 
00:03:22.438 11:58:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:22.438 11:58:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:22.439 11:58:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:22.439 ************************************
00:03:22.439 START TEST per_node_1G_alloc
00:03:22.439 ************************************
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.439 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
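For context on what NRHUGE=512 / HUGENODE=0,1 ask of setup.sh: get_test_nr_hugepages has just turned the 1048576 kB (1 GiB) request into 512 default-size 2048 kB pages for each listed node. At the kernel interface this boils down to writing per-node counts into the standard sysfs knobs; a hedged sketch under that assumption (the actual logic lives in spdk/scripts/setup.sh), run as root:

    # Per-node hugepage allocation as the kernel exposes it (sketch only).
    NRHUGE=512
    for node in 0 1; do    # HUGENODE=0,1
        echo "$NRHUGE" > \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # The arithmetic from the trace: 1048576 kB requested / 2048 kB per
    # page = 512 pages on each of the two nodes.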
00:03:23.817 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.817 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.817 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.817 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.817 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.817 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.817 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.817 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.817 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.817 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.817 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.817 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.817 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.817 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.817 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.817 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.817 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
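Everything from here to the end of the excerpt is get_meminfo at work: it snapshots the meminfo file once, then walks it line by line with IFS=': ', taking `continue` past every key that is not the one requested. A condensed sketch of that mechanism as the setup/common.sh@17-33 trace suggests it (reconstructed, so treat the exact shape as approximate):

    # get_meminfo, reconstructed from the traced setup/common.sh lines.
    # extglob is needed for the "Node N " prefix strip below.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem line
        mem_f=/proc/meminfo
        # With a node id, the per-node file is read instead; its lines are
        # prefixed "Node N ", which the expansion below removes.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' in this trace
            echo "$val"
            return 0
        done
    }

    # e.g. get_meminfo HugePages_Surp   -> prints 0 on this machine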
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44343232 kB' 'MemAvailable: 47850312 kB' 'Buffers: 2704 kB' 'Cached: 11740084 kB' 'SwapCached: 0 kB' 'Active: 8748308 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353900 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515112 kB' 'Mapped: 181600 kB' 'Shmem: 7841996 kB' 'KReclaimable: 199824 kB' 'Slab: 571044 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371220 kB' 'KernelStack: 12736 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9493852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.817 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare loop walks MemFree through HardwareCorrupted the same way; no key matches, so each iteration takes `continue` ...]
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... same get_meminfo preamble as traced above (common.sh@18-31): node=, local var/val and mem_f/mem, mem_f=/proc/meminfo, the node-file existence check, mapfile -t mem, the "Node +([0-9]) " prefix strip, IFS=': ' read ...]
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44342512 kB' 'MemAvailable: 47849592 kB' 'Buffers: 2704 kB' 'Cached: 11740088 kB' 'SwapCached: 0 kB' 'Active: 8748832 kB' 'Inactive: 3506384 kB' 'Active(anon): 8354424 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515680 kB' 'Mapped: 181552 kB' 'Shmem: 7842000 kB' 'KReclaimable: 199824 kB' 'Slab: 571076 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371252 kB' 'KernelStack: 12752 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9493872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
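At this point anon has been pinned to 0, and the snapshot above is about to be scanned for HugePages_Surp (then HugePages_Rsvd). As a semantic note on these counters: HugePages_Free counts pages not yet handed out, HugePages_Rsvd pages promised to mappings but not yet faulted in, and HugePages_Surp pages allocated beyond the static pool. A hedged one-liner for pulling the same counters directly, independent of the harness:

    # Read the hugepage counters the verifier cares about straight from
    # /proc/meminfo (field names are the kernel's own).
    awk -F': *' '/^HugePages_/ { print $1, $2 }' /proc/meminfo
    # On this box the snapshot shows Total=1024 Free=1024 Rsvd=0 Surp=0,
    # i.e. the whole static pool is still available.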
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.819 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare loop walks MemFree through HugePages_Rsvd the same way, this time including the AnonHugePages, Shmem*, File*, Cma*, Unaccepted and HugePages_Total/Free keys; no key matches, so each iteration takes `continue` ...]
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... same get_meminfo preamble as traced above (common.sh@18-31): node=, local var/val and mem_f/mem, mem_f=/proc/meminfo, the node-file existence check, mapfile -t mem, the "Node +([0-9]) " prefix strip, IFS=': ' read ...]
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44343228 kB' 'MemAvailable: 47850308 kB' 'Buffers: 2704 kB' 'Cached: 11740104 kB' 'SwapCached: 0 kB' 'Active: 8748592 kB' 'Inactive: 3506384 kB' 'Active(anon): 8354184 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515464 kB' 'Mapped: 181552 kB' 'Shmem: 7842016 kB' 'KReclaimable: 199824 kB' 'Slab: 571076 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371252 kB' 'KernelStack: 12784 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9493896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
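The scan below resolves HugePages_Rsvd the same way, completing the trio of counters. Putting the traced hugepages.sh@89-130 steps together, the verification flow is roughly the following hedged sketch, reusing the get_meminfo sketch above; `expected` stands in for the harness's per-node bookkeeping and is hypothetical:

    # Rough shape of verify_nr_hugepages as traced (hypothetical names).
    verify_nr_hugepages() {
        local anon surp resv node free expected=1024
        anon=$(get_meminfo AnonHugePages)    # 0 in this run
        surp=$(get_meminfo HugePages_Surp)   # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)   # resolved by the scan below
        # Per-node free counts, matching the "node0=1024 expecting 1024"
        # line printed after the default_setup test above.
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            free=$(get_meminfo HugePages_Free "$node")
            echo "node${node}=${free} expecting ${expected}"
        done
    }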
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.821 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare loop walks MemFree through Shmem the same way; no key matches, so each iteration takes `continue` ...]
00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.823 nr_hugepages=1024 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.823 resv_hugepages=0 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.823 surplus_hugepages=0 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.823 anon_hugepages=0 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44343228 kB' 'MemAvailable: 47850308 kB' 'Buffers: 2704 kB' 'Cached: 11740104 kB' 'SwapCached: 0 kB' 'Active: 8748304 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353896 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
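The scans collapsed above are all iterations of the same get_meminfo helper in setup/common.sh. As a reading aid, here is a minimal bash sketch of that helper, reconstructed only from the xtrace line tags (@17-@33) visible in this log; the real source is not reproduced here, so the exact control flow around the node check is an assumption:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Hedged sketch of setup/common.sh get_meminfo as implied by the trace.
    get_meminfo() {
        local get=$1 node=$2   # key to look up, optional NUMA node index
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Assumption: when a node is given, read that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it (trace @29).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Key: value kB"
            [[ $var == "$get" ]] || continue          # the collapsed scan above
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints 0 for the snapshot above; called as get_meminfo HugePages_Surp 0 it reads node0's meminfo instead, which is exactly what the later iterations of this trace do.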
00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace collapsed, setup/common.sh@17-31: get_meminfo setup; get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ']
00:03:23.823 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44343228 kB' 'MemAvailable: 47850308 kB' 'Buffers: 2704 kB' 'Cached: 11740104 kB' 'SwapCached: 0 kB' 'Active: 8748304 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353896 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515176 kB' 'Mapped: 181552 kB' 'Shmem: 7842016 kB' 'KReclaimable: 199824 kB' 'Slab: 571076 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371252 kB' 'KernelStack: 12784 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9493916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
[xtrace collapsed 00:03:23.824-00:03:23.825, setup/common.sh@31-32: same scan, this time against HugePages_Total; every key from MemTotal through Unaccepted hit continue]
00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
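get_nodes, traced just above, discovers the NUMA topology by globbing the per-node sysfs directories. A hedged sketch of the equivalent logic follows; the literal 512 is the expanded per-node value xtrace printed for this run (1024 pages spread over 2 nodes), and the real script presumably computes it rather than hardcoding it:

    shopt -s extglob
    declare -a nodes_sys

    # Sketch of setup/hugepages.sh get_nodes (@27-@33) as implied by the trace.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} reduces ".../node0" to the bare index "0".
            nodes_sys[${node##*node}]=512   # expanded value in this run
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # fail if no NUMA node was found
    }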
00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace collapsed, setup/common.sh@17-31: get_meminfo setup; get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ']
00:03:23.825 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27152480 kB' 'MemUsed: 5677404 kB' 'SwapCached: 0 kB' 'Active: 3495132 kB' 'Inactive: 153096 kB' 'Active(anon): 3334480 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 153096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3306048 kB' 'Mapped: 91696 kB' 'AnonPages: 345376 kB' 'Shmem: 2992300 kB' 'KernelStack: 7976 kB' 'PageTables: 5016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98044 kB' 'Slab: 324676 kB' 'SReclaimable: 98044 kB' 'SUnreclaim: 226632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace collapsed 00:03:23.825-00:03:23.827, setup/common.sh@31-32: scan of the node0 snapshot against HugePages_Surp; every key from MemTotal through HugePages_Free hit continue]
00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
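The @115-@117 loop just traced folds reserved and per-node surplus pages into each node's expected page count before the test compares it against what the node actually reports. A hedged sketch of that loop, using the get_meminfo sketch above; nodes_test is assumed to be seeded from nodes_sys elsewhere in setup/hugepages.sh, which this log does not show:

    # resv came from get_meminfo HugePages_Rsvd earlier (0 in this run).
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # account reserved pages
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus, 0 here
        (( nodes_test[node] += surp ))
    done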
00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.827 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17190496 kB' 'MemUsed: 10521328 kB' 'SwapCached: 0 kB' 'Active: 5253496 kB' 'Inactive: 3353288 kB' 'Active(anon): 5019740 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8436824 kB' 'Mapped: 89856 kB' 'AnonPages: 170040 kB' 'Shmem: 4849780 kB' 'KernelStack: 4792 kB' 'PageTables: 2984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101780 kB' 'Slab: 246400 kB' 'SReclaimable: 101780 kB' 'SUnreclaim: 144620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace field-by-field scan elided: each field of the node1 snapshot, MemTotal through HugePages_Free, is tested against HugePages_Surp and skipped with '-- # continue']
00:03:23.828 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.828 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.828 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.828 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.829
11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:23.829 node1=512 expecting 512 00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:23.829 00:03:23.829 real 0m1.522s 00:03:23.829 user 0m0.666s 00:03:23.829 sys 0m0.820s 00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.829 11:58:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.829 ************************************ 00:03:23.829 END TEST per_node_1G_alloc 00:03:23.829 ************************************ 00:03:23.829 11:58:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:23.829 11:58:31 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:23.829 11:58:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.829 11:58:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.829 11:58:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.829 ************************************ 00:03:23.829 START TEST even_2G_alloc 00:03:23.829 ************************************ 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:23.829 11:58:31 
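verify_nr_hugepages folds each node's surplus and reserved counts into nodes_test[] and checks the per-node totals against the expected 512, which is what the 'nodeN=512 expecting 512' lines above report. A sketch of that arithmetic with this run's values hard-coded (the standalone framing and exit-on-mismatch are my own):

  nodes_test=(512 512)   # 2048kB pages allocated on node0 / node1
  expected=512
  for node in "${!nodes_test[@]}"; do
      surp=0 resv=0      # HugePages_Surp and HugePages_Rsvd were both 0 here
      (( nodes_test[node] += surp + resv ))
      echo "node$node=${nodes_test[node]} expecting $expected"
      [[ ${nodes_test[node]} -eq $expected ]] || exit 1
  done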
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.829 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.210 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.210 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.210 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.210 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.210 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.210 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.210 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.210 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.210 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.210 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.210 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.210 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.210 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.210 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.210 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.210 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.210 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.210 11:58:33 
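even_2G_alloc re-runs setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, so the 1024 pages of 2048kB end up spread evenly, 512 per node on this two-socket box, before verify_nr_hugepages re-reads the counters. An illustrative even split using the stock per-node sysfs knobs (not the actual scripts/setup.sh implementation):

  NRHUGE=1024
  nodes=(/sys/devices/system/node/node[0-9]*)   # node0, node1, ...
  per_node=$(( NRHUGE / ${#nodes[@]} ))         # 512 on a two-node box
  for n in "${nodes[@]}"; do
      echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
  done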
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.210 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.211 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44340816 kB' 'MemAvailable: 47847896 kB' 'Buffers: 2704 kB' 'Cached: 11740216 kB' 'SwapCached: 0 kB' 'Active: 8749092 kB' 'Inactive: 3506384 kB' 'Active(anon): 8354684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515664 kB' 'Mapped: 181564 kB' 'Shmem: 7842128 kB' 'KReclaimable: 199824 kB' 'Slab: 571064 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371240 kB' 'KernelStack: 12752 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9494120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
[xtrace field-by-field scan elided: each field of this /proc/meminfo snapshot, MemTotal through HardwareCorrupted, is tested against AnonHugePages and skipped with '-- # continue']
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.213 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44340816 kB' 'MemAvailable: 47847896 kB' 'Buffers: 2704 kB' 'Cached: 11740220 kB' 'SwapCached: 0 kB' 'Active: 8749288 kB' 'Inactive: 3506384 kB' 'Active(anon): 8354880 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515876 kB' 'Mapped: 181560 kB' 'Shmem: 7842132 kB' 'KReclaimable: 199824 kB' 'Slab: 571064 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371240 kB' 'KernelStack: 12800 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9494136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
[xtrace field-by-field scan elided: each field of this snapshot, MemTotal through HugePages_Free, is tested against HugePages_Surp and skipped with '-- # continue'; the scan is still in progress where the log continues]
00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.215 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44341416 kB' 'MemAvailable: 47848496 kB' 'Buffers: 2704 kB' 'Cached: 11740236 kB' 'SwapCached: 0 kB' 'Active: 8748924 kB' 'Inactive: 3506384 kB' 'Active(anon): 8354516 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515476 kB' 'Mapped: 181560 kB' 'Shmem: 7842148 kB' 'KReclaimable: 199824 kB' 'Slab: 571072 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371248 kB' 'KernelStack: 12784 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9494156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.216 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
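Note: the xtrace above is common.sh's get_meminfo helper scanning a meminfo snapshot one field at a time with IFS=': ' read -r var val _. A minimal standalone sketch of the same technique follows; the function name is illustrative, not the real common.sh interface:

    # Print the value of one /proc/meminfo field, the way the traced loop
    # does it: split each line on ': ' and skip ("continue") every
    # non-matching key -- which is what produces the long runs above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # e.g. "0" for HugePages_Surp
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp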
[xtrace condensed: common.sh@31-32 reads MemTotal through HugePages_Free from the snapshot and continues past each; none match HugePages_Rsvd]
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:25.219 nr_hugepages=1024
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.219 resv_hugepages=0
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.219 surplus_hugepages=0
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.219 anon_hugepages=0
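Note: the summary just printed (nr_hugepages=1024, resv/surplus/anon all 0) feeds the consistency check on hugepages.sh@107 below. A hedged sketch of that arithmetic, reusing the illustrative helper from the earlier note:

    # Assert the pool is fully accounted for: every page counted by the
    # kernel is either requested, surplus, or reserved. Mirrors the
    # traced check (( 1024 == nr_hugepages + surp + resv )); the
    # get_meminfo_sketch helper name is hypothetical.
    nr_hugepages=1024
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "hugepage accounting mismatch" >&2
    fi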
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.219 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44341896 kB' 'MemAvailable: 47848976 kB' 'Buffers: 2704 kB' 'Cached: 11740260 kB' 'SwapCached: 0 kB' 'Active: 8748976 kB' 'Inactive: 3506384 kB' 'Active(anon): 8354568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515548 kB' 'Mapped: 181560 kB' 'Shmem: 7842172 kB' 'KReclaimable: 199824 kB' 'Slab: 571072 kB' 'SReclaimable: 199824 kB' 'SUnreclaim: 371248 kB' 'KernelStack: 12816 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9494180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: common.sh@31-32 reads MemTotal through Unaccepted and continues past each; none match HugePages_Total]
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
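Note: get_nodes above discovers two NUMA nodes and the even_2G_alloc test assigns 512 of the 1024 pages to each. A standalone sketch of that enumeration and even split, with variable names mirroring the trace:

    # Enumerate NUMA nodes via sysfs and split nr_hugepages evenly,
    # mirroring the get_nodes/nodes_sys loop in the trace (512 + 512).
    shopt -s extglob nullglob
    declare -A nodes_sys
    nr_hugepages=1024

    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=0    # "/...node1" -> key "1"
    done
    no_nodes=${#nodes_sys[@]}

    for id in "${!nodes_sys[@]}"; do
        nodes_sys[$id]=$(( nr_hugepages / no_nodes ))   # 512 on this 2-node box
    done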
00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.221 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.481 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.481 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.481 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27149632 kB' 'MemUsed: 5680252 kB' 'SwapCached: 0 kB' 'Active: 3495332 kB' 'Inactive: 153096 kB' 'Active(anon): 3334680 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 153096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3306056 kB' 'Mapped: 91704 kB' 'AnonPages: 345448 kB' 'Shmem: 2992308 kB' 'KernelStack: 8008 kB' 'PageTables: 4976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98044 kB' 'Slab: 324648 kB' 'SReclaimable: 98044 kB' 'SUnreclaim: 226604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.481 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ [... xtrace elided: every node0 key in the snapshot above, MemTotal through HugePages_Free, is compared against HugePages_Surp and continues ...] 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
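The hugepages.sh@115-@117 lines bracketing these lookups do the per-node book-keeping. Roughly, reusing the get_meminfo sketch above and with the resv value reconstructed rather than read from the script:

# For every populated NUMA node, the expected count is padded with reserved
# pages, then with whatever surplus the kernel reports for that node. In this
# run both surplus lookups return 0, so nodes_test stays at 512 per node.
declare -a nodes_test=(512 512)
resv=0                                     # assumption: no reserved pages in this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done
echo "${nodes_test[@]}"                    # 512 512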
00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.482 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17192268 kB' 'MemUsed: 10519556 kB' 'SwapCached: 0 kB' 'Active: 5253340 kB' 'Inactive: 3353288 kB' 'Active(anon): 5019584 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8436932 kB' 'Mapped: 89856 kB' 'AnonPages: 169780 kB' 'Shmem: 4849888 kB' 'KernelStack: 4792 kB' 'PageTables: 3028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101780 kB' 'Slab: 246424 kB' 'SReclaimable: 101780 kB' 'SUnreclaim: 144644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [... xtrace elided: every node1 key in the snapshot above, MemTotal through HugePages_Free, is compared against HugePages_Surp and continues ...] 00:03:25.483 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.483 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.483 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.483 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.483 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.483 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.483 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.484 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:25.484 node0=512 expecting 512 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.484 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
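The @126-@130 lines use associative-array keys as a de-duplicator: writing each per-node count in as a key collapses duplicates, so a single comparison of the remaining keys proves every node got the same allocation. As a sketch, with nodes_test and nodes_sys taken from the trace:

# Keys of a Bash associative array are unique. After the loop, the keys of
# sorted_t are the distinct expected counts and the keys of sorted_s the
# distinct counts the system actually handed out; here both collapse to "512".
declare -A sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # traced as [[ 512 == \5\1\2 ]]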
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.484 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:25.484 node1=512 expecting 512 00:03:25.484 11:58:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:25.484 00:03:25.484 real 0m1.480s 00:03:25.484 user 0m0.658s 00:03:25.484 sys 0m0.786s 00:03:25.484 11:58:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.484 11:58:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.484 ************************************ 00:03:25.484 END TEST even_2G_alloc 00:03:25.484 ************************************ 00:03:25.484 11:58:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:25.484 11:58:33 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:25.484 11:58:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.484 11:58:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.484 11:58:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.484 ************************************ 00:03:25.484 START TEST odd_alloc 00:03:25.484 ************************************ 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- 
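The odd_alloc prologue traced here (size 2098176 kB against 2048 kB hugepages) is plain integer arithmetic; below is a sketch of the @81-@84 loop as it appears under xtrace, where the `: $((...))` lines surface as ": 513", ": 1" and later ": 0", ": 0".

# 2098176 kB / 2048 kB = 1025 hugepages, an odd total. Walking node indices
# from the top, each node takes remaining/nodes-left, so two nodes end up
# with 512 and 513, matching the nodes_test assignments in the trace.
size=2098176 default_hugepages=2048
nr_hugepages=$(( size / default_hugepages ))             # 1025
_nr_hugepages=$nr_hugepages
_no_nodes=2
nodes_test=()
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))  # traced as ": 513" then ": 0"
    : $(( --_no_nodes ))                                 # traced as ": 1" then ": 0"
done
echo "${nodes_test[@]}"                                  # 513 512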
setup/hugepages.sh@83 -- # : 0 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.484 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.864 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:26.864 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.864 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:26.864 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:26.864 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:26.864 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:26.864 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:26.864 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:26.864 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:26.864 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:26.864 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:26.864 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:26.864 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:26.864 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:26.864 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:26.864 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:26.864 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- 
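The hugepages.sh@96 test in the trace above is the transparent-hugepage gate. Spelled out as a sketch, with the standard kernel sysfs path and the variable name taken from the trace:

# /sys/kernel/mm/transparent_hugepage/enabled reads like "always [madvise] never",
# the bracketed word being the active mode. Unless "[never]" is selected,
# anonymous THP pages may exist, so AnonHugePages is fetched and accounted for;
# here the mode is "[madvise]" and the lookup that follows returns 0 anyway.
if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)
else
    anon=0
fi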
setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.864 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44333476 kB' 'MemAvailable: 47840552 kB' 'Buffers: 2704 kB' 'Cached: 11740352 kB' 'SwapCached: 0 kB' 'Active: 8745600 kB' 'Inactive: 3506384 kB' 'Active(anon): 8351192 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512344 kB' 'Mapped: 180568 kB' 'Shmem: 7842264 kB' 'KReclaimable: 199816 kB' 'Slab: 571164 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371348 kB' 'KernelStack: 12736 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9480612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' [... xtrace elided: every key in the snapshot above, MemTotal through HardwareCorrupted, is compared against AnonHugePages and continues ...] 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc --
setup/common.sh@33 -- # echo 0 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.865 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44338328 kB' 'MemAvailable: 47845404 kB' 'Buffers: 2704 kB' 'Cached: 11740356 kB' 'SwapCached: 0 kB' 'Active: 8745832 kB' 'Inactive: 3506384 kB' 'Active(anon): 8351424 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512588 kB' 'Mapped: 180564 kB' 'Shmem: 7842268 kB' 'KReclaimable: 199816 kB' 'Slab: 571164 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371348 kB' 'KernelStack: 12768 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9480628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 
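With anon known, the trace now walks the global /proc/meminfo for the surplus count. The invariant that verify_nr_hugepages is building toward looks like this; a sketch assembled from the @97-@110 trace lines, with the HugePages_Rsvd lookup assumed to be fetched the same way:

# The global ledger has to balance: the HugePages_Total the kernel reports
# must equal the pages the test configured plus any surplus the kernel grew
# and any pages it still holds reserved. For this odd_alloc run that is
# 1025 == 1025 + 0 + 0.
surp=$(get_meminfo HugePages_Surp)    # 0 in the snapshot above
resv=$(get_meminfo HugePages_Rsvd)    # assumption: symmetric lookup, 0 here
total=$(get_meminfo HugePages_Total)  # 1025
(( total == nr_hugepages + surp + resv ))   # the @110-style check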
11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.866 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 
11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44338628 kB' 'MemAvailable: 47845704 kB' 'Buffers: 2704 kB' 'Cached: 11740376 kB' 'SwapCached: 0 kB' 'Active: 8745816 kB' 'Inactive: 3506384 kB' 'Active(anon): 
8351408 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512468 kB' 'Mapped: 180488 kB' 'Shmem: 7842288 kB' 'KReclaimable: 199816 kB' 'Slab: 571136 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371320 kB' 'KernelStack: 12784 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9480648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.867 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.868 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:26.869 11:58:34 
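(The lookups traced here all run the same helper. For readability, a minimal sketch of it, reconstructed from the xtrace lines above; this is not the verbatim setup/common.sh, and the quoting, control flow, and extglob details are assumptions:)

    shopt -s extglob                     # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1                     # key to look up, e.g. HugePages_Surp
        local node=${2:-}                # optional NUMA node; empty in this trace
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # use the per-node sysfs copy when a node is requested and present
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # strip the "Node N " prefix per-node files carry
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of key checks in the trace
            echo "$val"                        # e.g. 0 for HugePages_Surp on this box
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

(Called with no node argument, as here, it scans /proc/meminfo, which is why the trace shows one printf of the whole snapshot followed by one [[ ... ]] / continue pair per key; the Node-prefix strip at common.sh@29 only does work when a node argument points it at a per-node file whose lines read "Node 0 MemTotal: ...".)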
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:26.869 nr_hugepages=1025
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.869 resv_hugepages=0
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.869 surplus_hugepages=0
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.869 anon_hugepages=0
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.869 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44338288 kB' 'MemAvailable: 47845364 kB' 'Buffers: 2704 kB' 'Cached: 11740396 kB' 'SwapCached: 0 kB' 'Active: 8745852 kB' 'Inactive: 3506384 kB' 'Active(anon): 8351444 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512472 kB' 'Mapped: 180488 kB' 'Shmem: 7842308 kB' 'KReclaimable: 199816 kB' 'Slab: 571136 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371320 kB' 'KernelStack: 12784 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9480668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
00:03:26.869-871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue [the same per-key scan once more; this excerpt covers MemTotal through FilePmdMapped, and the raw trace picks up from there below:]
11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.871 11:58:34 
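
The long run of compare-and-continue records above is setup/common.sh's get_meminfo scanning a meminfo file key by key until it reaches the requested field (here HugePages_Total, which came back as 1025). Below is a minimal standalone sketch of that pattern, reconstructed from the trace; the real common.sh may differ in details such as error handling.

    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # A per-node query reads that node's own meminfo; its lines carry a
        # "Node <N> " prefix that /proc/meminfo lines do not have.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix
        # Scan key by key; every key that is not the one requested shows up
        # in the xtrace as a "[[ ... ]]" / "continue" pair.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # global count, 1025 in the run above
    get_meminfo HugePages_Surp 0     # node 0 surplus, 0 in the run above

Reading the per-node files under /sys/devices/system/node lets the same loop serve both the global and the per-node queries, which is why the trace repeats this scan once per node and per field.
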
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27146804 kB' 'MemUsed: 5683080 kB' 'SwapCached: 0 kB' 'Active: 3493132 kB' 'Inactive: 153096 kB' 'Active(anon): 3332480 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 153096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3306064 kB' 'Mapped: 90648 kB' 'AnonPages: 343360 kB' 'Shmem: 2992316 kB' 'KernelStack: 7960 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98044 kB' 'Slab: 324628 kB' 'SReclaimable: 98044 kB' 'SUnreclaim: 226584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.871 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same compare/continue/IFS/read cycle repeats for MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17191764 kB' 'MemUsed: 10520060 kB' 'SwapCached: 0 kB' 'Active: 5252712 kB' 'Inactive: 3353288 kB' 'Active(anon): 5018956 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8437060 kB' 'Mapped: 89840 kB' 'AnonPages: 169112 kB' 'Shmem: 4850016 kB' 'KernelStack: 4824 kB' 'PageTables: 2916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101772 kB' 'Slab: 246508 kB' 'SReclaimable: 101772 kB' 'SUnreclaim: 144736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
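
The per-node figures the trace derives by parsing these meminfo snapshots (node0 HugePages_Total: 512, node1: 513, surplus 0 on both) can also be read directly from the hugepage sysfs knobs. An illustrative alternative query, not what the suite itself does:

    # Illustrative direct sysfs query for the same per-node counts
    # (the suite parses the per-node meminfo files instead).
    for node in /sys/devices/system/node/node[0-9]*; do
        printf '%s: %s x 2MiB hugepages\n' "${node##*/}" \
            "$(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages)"
    done
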
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.872 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same compare/continue/IFS/read cycle repeats for MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted ...]
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
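
What these scans feed into: hugepages.sh@110 already asserted that the global HugePages_Total (1025) equals nr_hugepages + surplus + reserved, and the two per-node queries supply the per-node expectations (512 and 513, which sum to the odd global total). A quick manual cross-check of that identity, assuming the same /sys layout as above; not part of the suite:

    # Manual cross-check: per-node HugePages_Total should sum to the global one.
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    sum=0
    for f in /sys/devices/system/node/node[0-9]*/meminfo; do
        sum=$(( sum + $(awk '/HugePages_Total/ {print $NF}' "$f") ))
    done
    echo "global=$total per-node sum=$sum"   # in this run: 1025 = 512 + 513
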
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.873 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:26.874 node0=512 expecting 513
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:26.874 node1=513 expecting 512
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:26.874
00:03:26.874 real 0m1.489s
00:03:26.874 user 0m0.600s
00:03:26.874 sys 0m0.850s
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.874 11:58:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.874 ************************************
00:03:26.874 END TEST odd_alloc
00:03:26.874 ************************************
00:03:26.874 11:58:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:26.874 11:58:34 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:26.874 11:58:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.874 11:58:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.874 11:58:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.874 ************************************
00:03:26.874 START TEST custom_alloc
00:03:26.874 ************************************
00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc --
setup/hugepages.sh@167 -- # local IFS=, 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.874 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:26.874 
11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.874 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.257 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.257 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.257 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.257 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.257 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.257 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.257 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.257 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.257 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.257 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.257 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.257 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.257 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.257 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.257 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.257 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.257 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
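
custom_alloc requested an uneven split, HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024': the two get_test_nr_hugepages calls above turned 1048576 kB and 2097152 kB into 512 and 1024 pages at the default 2048 kB hugepage size (1048576/2048 = 512, 2097152/2048 = 1024), 1536 pages in total. The verify step that follows first checks /sys/kernel/mm/transparent_hugepage/enabled (the 'always [madvise] never' string in the trace) and then re-reads /proc/meminfo. Conceptually, the allocation scripts/setup.sh performed here boils down to two sysfs writes; an illustrative equivalent only, the real script does much more:

    # Illustrative equivalent of HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    # (needs root; scripts/setup.sh also handles drivers, mounts, etc.).
    echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    grep '^HugePages_Total' /proc/meminfo    # expect: HugePages_Total:    1536
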
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43294296 kB' 'MemAvailable: 46801372 kB' 'Buffers: 2704 kB' 'Cached: 11740480 kB' 'SwapCached: 0 kB' 'Active: 8747872 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353464 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514276 kB' 'Mapped: 181036 kB' 'Shmem: 7842392 kB' 'KReclaimable: 199816 kB' 'Slab: 571128 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371312 kB' 'KernelStack: 12736 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9483800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same compare/continue/IFS/read cycle repeats for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables and SecPageTables ...]
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.257
11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.257 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.258 11:58:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43290980 kB' 'MemAvailable: 46798056 kB' 'Buffers: 2704 kB' 'Cached: 11740480 kB' 'SwapCached: 0 kB' 'Active: 8751280 kB' 'Inactive: 3506384 kB' 'Active(anon): 8356872 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517656 kB' 'Mapped: 180996 kB' 'Shmem: 7842392 kB' 'KReclaimable: 199816 kB' 'Slab: 571120 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371304 kB' 'KernelStack: 12800 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9487000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 
11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.258 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 
00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43290620 kB' 'MemAvailable: 46797696 kB' 'Buffers: 2704 kB' 'Cached: 11740500 kB' 'SwapCached: 0 kB' 'Active: 8751836 kB' 'Inactive: 3506384 kB' 'Active(anon): 8357428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518244 kB' 'Mapped: 181404 kB' 'Shmem: 7842412 kB' 'KReclaimable: 199816 kB' 'Slab: 571160 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371344 kB' 'KernelStack: 12784 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9487020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.259 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 
11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:28.260 nr_hugepages=1536 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.260 resv_hugepages=0 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.260 surplus_hugepages=0 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.260 anon_hugepages=0 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- 
00:03:28.260 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43290924 kB' 'MemAvailable: 46798000 kB' 'Buffers: 2704 kB' 'Cached: 11740504 kB' 'SwapCached: 0 kB' 'Active: 8745944 kB' 'Inactive: 3506384 kB' 'Active(anon): 8351536 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512372 kB' 'Mapped: 180968 kB' 'Shmem: 7842416 kB' 'KReclaimable: 199816 kB' 'Slab: 571160 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371344 kB' 'KernelStack: 12752 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9480924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB'
[xtrace elided: setup/common.sh@31-32 reads and skips each field (MemTotal through Unaccepted) until HugePages_Total matches]
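A quick consistency check the snapshot above allows (plain shell arithmetic, not part of the traced test):

# 1536 hugepages x 2048 kB each should equal the pool size reported above.
echo $(( 1536 * 2048 ))    # 3145728, matching 'Hugetlb: 3145728 kB'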
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
[xtrace elided: setup/hugepages.sh@27-33 walks /sys/devices/system/node/node+([0-9]) and records nodes_sys[0]=512, nodes_sys[1]=1024, no_nodes=2]
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace elided: setup/common.sh@17-29 selects mem_f=/sys/devices/system/node/node0/meminfo and mapfiles it, stripping the 'Node 0 ' prefix]
00:03:28.261 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27153800 kB' 'MemUsed: 5676084 kB' 'SwapCached: 0 kB' 'Active: 3493388 kB' 'Inactive: 153096 kB' 'Active(anon): 3332736 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 153096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3306076 kB' 'Mapped: 90660 kB' 'AnonPages: 343624 kB' 'Shmem: 2992328 kB' 'KernelStack: 8008 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98044 kB' 'Slab: 324628 kB' 'SReclaimable: 98044 kB' 'SUnreclaim: 226584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
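The get_nodes step elided a few lines above can be sketched roughly as follows; the sysfs path for the per-node counts is an assumption, inferred from the node+([0-9]) glob in the trace rather than read from the SPDK source:

# Hypothetical reconstruction of get_nodes: record the 2048 kB hugepage
# count currently configured on every NUMA node found under sysfs.
shopt -s extglob nullglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes nodes_sys=(${nodes_sys[*]})"    # here: no_nodes=2 nodes_sys=(512 1024)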
[xtrace elided: setup/common.sh@31-32 continues scanning the node0 fields until HugePages_Surp matches]
00:03:28.262 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.262 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.262 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.262 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.262 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.262 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[xtrace elided: setup/common.sh@17-29 selects mem_f=/sys/devices/system/node/node1/meminfo and mapfiles it, stripping the 'Node 1 ' prefix]
00:03:28.262 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16136116 kB' 'MemUsed: 11575708 kB' 'SwapCached: 0 kB' 'Active: 5252608 kB' 'Inactive: 3353288 kB' 'Active(anon): 5018852 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3353288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8437188 kB' 'Mapped: 89840 kB' 'AnonPages: 168708 kB' 'Shmem: 4850144 kB' 'KernelStack: 4760 kB' 'PageTables: 2764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101772 kB' 'Slab: 246532 kB' 'SReclaimable: 101772 kB' 'SUnreclaim: 144760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
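The two per-node pools reconcile with the global figure read earlier:

# node0 (512) + node1 (1024) equals the global HugePages_Total of 1536.
echo $(( 512 + 1024 ))    # 1536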
[xtrace elided: setup/common.sh@31-32 scans the node1 fields until HugePages_Surp matches]
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
[xtrace elided: setup/hugepages.sh@126-127 fills sorted_t/sorted_s for each node]
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:28.263 node0=512 expecting 512
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:28.263 node1=1024 expecting 1024
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:28.263 real 0m1.364s
00:03:28.263 user 0m0.543s
00:03:28.263 sys 0m0.773s
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:28.263 11:58:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:28.263 ************************************
00:03:28.263 END TEST custom_alloc
00:03:28.263 ************************************
00:03:28.263 11:58:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:28.263 11:58:36 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:28.263 11:58:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:28.263 11:58:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:28.263 11:58:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:28.263 ************************************
00:03:28.263 START TEST no_shrink_alloc
00:03:28.263 ************************************
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
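The size-to-page-count conversion implied by get_test_nr_hugepages appears to be the following (assuming the size argument is in kB; a sketch, not the SPDK source):

# 2 GiB expressed in kB, divided by the default 2048 kB hugepage size.
size_kb=2097152
hugepage_kb=2048
echo "nr_hugepages=$(( size_kb / hugepage_kb ))"    # nr_hugepages=1024, pinned to node 0 here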
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.263 11:58:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.639 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.639 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:29.639 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:29.639 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:29.639 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:29.639 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:29.639 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:29.639 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:29.639 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:29.639 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:29.639 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:29.639 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:29.639 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:29.639 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:29.639 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:29.639 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:29.639 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:29.639 11:58:37
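The get_test_nr_hugepages trace above converts the requested size into a page count and spreads it over the caller's nodes: 2097152 kB against the 2048 kB default hugepage size gives nr_hugepages=1024, and with user_nodes=('0') the whole count lands on node 0. A hedged sketch of that arithmetic (variable names follow the trace; this reproduces the visible outcome, not the verbatim setup/hugepages.sh source):

#!/usr/bin/env bash
# Sizing step: kB requested / kB per hugepage -> number of 2 MiB pages.
size=2097152                                  # requested total, in kB
default_hugepages=2048                        # Hugepagesize from meminfo, kB
nr_hugepages=$(( size / default_hugepages ))  # -> 1024

# Per-node step: pin the full count to each node the caller named.
user_nodes=('0')
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages
done
echo "node0=${nodes_test[0]}"                 # node0=1024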
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.639 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44300744 kB' 'MemAvailable: 47807820 kB' 'Buffers: 2704 kB' 'Cached: 11740608 kB' 'SwapCached: 0 kB' 'Active: 8747032 kB' 'Inactive: 3506384 kB' 'Active(anon): 8352624 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513420 kB' 'Mapped: 180636 kB' 'Shmem: 7842520 kB' 'KReclaimable: 199816 kB' 'Slab: 571276 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371460 kB' 'KernelStack: 12848 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9481944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
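The meminfo dump just printed comes from the snapshot step visible in the trace above: mapfile slurps the whole file into an array, and the "${mem[@]#Node +([0-9]) }" expansion strips the "Node <n> " prefix that per-node sysfs meminfo files carry, so /proc/meminfo and /sys/devices/system/node/node<n>/meminfo parse identically afterwards. The same steps, runnable on their own (extglob must be on for the +([0-9]) pattern):

#!/usr/bin/env bash
shopt -s extglob                    # makes +([0-9]) a legal glob
mapfile -t mem < /proc/meminfo      # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")    # no-op for /proc, strips sysfs prefix
printf '%s\n' "${mem[@]}"           # the dump format seen in this log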
00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 
11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 
11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.640 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44300744 kB' 'MemAvailable: 47807820 kB' 'Buffers: 2704 kB' 'Cached: 11740608 kB' 'SwapCached: 0 kB' 'Active: 8747780 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353372 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513736 kB' 'Mapped: 180576 kB' 'Shmem: 7842520 kB' 'KReclaimable: 199816 kB' 'Slab: 571276 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371460 kB' 'KernelStack: 12864 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9481208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 
11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.641 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.642 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44303728 kB' 'MemAvailable: 47810804 kB' 'Buffers: 2704 kB' 'Cached: 11740632 kB' 'SwapCached: 0 kB' 'Active: 8746276 kB' 'Inactive: 3506384 kB' 'Active(anon): 8351868 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512548 kB' 'Mapped: 180516 kB' 'Shmem: 7842544 kB' 'KReclaimable: 199816 kB' 'Slab: 571236 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371420 kB' 'KernelStack: 12800 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9481228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.643 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.644 11:58:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.644 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.645 nr_hugepages=1024 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.645 resv_hugepages=0 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.645 surplus_hugepages=0 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.645 anon_hugepages=0 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
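At this point in the trace, setup/common.sh's get_meminfo has just returned HugePages_Rsvd=0 (so resv=0), the test has echoed nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and a fresh get_meminfo HugePages_Total call has set up its locals, mapfile'd the file, and stripped any "Node <n> " prefix via mem=("${mem[@]#Node +([0-9]) }"). What the long IFS=': ' / read / continue runs above implement is a plain key lookup over "key: value" lines. A minimal standalone sketch of the same idea, assuming bash with extglob; the helper name get_meminfo_value is ours, not SPDK's:

shopt -s extglob

# Look up one key in /proc/meminfo, or in a NUMA node's meminfo when a node
# number is given (per-node lines carry a "Node <n> " prefix that must go).
get_meminfo_value() {
    local key=$1 node=${2-} f=/proc/meminfo line k v _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }           # same strip as setup/common.sh@29
        IFS=': ' read -r k v _ <<< "$line"    # "HugePages_Rsvd:  0" -> key/value
        [[ $k == "$key" ]] && { echo "$v"; return 0; }
    done < "$f"
    return 1
}

resv=$(get_meminfo_value HugePages_Rsvd)      # 0 in the run above
total=$(get_meminfo_value HugePages_Total)    # 1024 in the run above

The xtrace looks verbose only because set -x prints every continue of that loop; the logic is a single linear scan that stops at the first matching key.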
00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44304736 kB' 'MemAvailable: 47811812 kB' 'Buffers: 2704 kB' 'Cached: 11740652 kB' 'SwapCached: 0 kB' 'Active: 8746384 kB' 'Inactive: 3506384 kB' 'Active(anon): 8351976 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512672 kB' 'Mapped: 180516 kB' 'Shmem: 7842564 kB' 'KReclaimable: 199816 kB' 'Slab: 571236 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371420 kB' 'KernelStack: 12832 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9481252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.645 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26093484 kB' 'MemUsed: 6736400 kB' 'SwapCached: 0 kB' 'Active: 3493628 kB' 'Inactive: 153096 kB' 'Active(anon): 3332976 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 153096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3306144 kB' 'Mapped: 90676 kB' 'AnonPages: 343776 kB' 'Shmem: 2992396 kB' 'KernelStack: 8008 kB' 'PageTables: 4888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98044 kB' 'Slab: 324588 kB' 'SReclaimable: 98044 kB' 'SUnreclaim: 226544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.904 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 
11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
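The scan running here is the per-node pass: get_meminfo was called with node=0, so mem_f switched to /sys/devices/system/node/node0/meminfo and the loop is walking node0's counters for HugePages_Surp. For reference, per-node hugepage counts can also be read straight from the dedicated sysfs counters instead of parsing node<N>/meminfo; a sketch under the assumption of a standard Linux sysfs layout and the 2 MiB page size used in this run (Hugepagesize: 2048 kB):

# Report per-NUMA-node 2 MiB hugepage totals directly from sysfs.
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}                          # same suffix strip as hugepages.sh@30
    hp=$node/hugepages/hugepages-2048kB
    printf 'node%s: total=%s free=%s\n' \
        "$n" "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
done

In this run that would report total=1024 on node0 and 0 on node1, matching the nodes_sys values assigned earlier and the "node0=1024 expecting 1024" line the test prints just below.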
00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.905 node0=1024 expecting 1024 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.905 11:58:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.838 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.838 0000:88:00.0 (8086 0a54): Already using the 
vfio-pci driver 00:03:30.838 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.838 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.838 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.838 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.838 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.838 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.838 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.838 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.838 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.838 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.838 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.838 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.838 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.838 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.838 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:31.102 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44312260 kB' 'MemAvailable: 47819336 kB' 'Buffers: 2704 kB' 'Cached: 11740720 kB' 'SwapCached: 0 kB' 'Active: 8746776 kB' 'Inactive: 
3506384 kB' 'Active(anon): 8352368 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512944 kB' 'Mapped: 180604 kB' 'Shmem: 7842632 kB' 'KReclaimable: 199816 kB' 'Slab: 571144 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371328 kB' 'KernelStack: 12816 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9481596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.102 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
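The run of "[[ Field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above is bash xtrace from the get_meminfo helper in setup/common.sh: it snapshots the meminfo source with mapfile (stripping any "Node <N> " prefix via "${mem[@]#Node +([0-9]) }"), then walks one "field: value" pair per iteration with IFS=': ' read -r var val _ until the requested key matches, at which point it echoes the value and returns. xtrace escapes every character of the quoted right-hand side of [[ == ]], which is why AnonHugePages renders as \A\n\o\n\H\u\g\e\P\a\g\e\s. A minimal sketch of that lookup, assuming a simplified reimplementation rather than the verbatim SPDK helper (get_meminfo_sketch is an illustrative name):

    # Sketch, not the shipped helper: look up one key from /proc/meminfo,
    # or from a NUMA node's meminfo when a node id is supplied.
    get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node lines carry a "Node <N> " prefix; the real helper strips
        # it before scanning, which this simplified reader skips.
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
        # Quoting the right-hand side disables glob matching; xtrace then
        # prints it fully escaped, e.g. [[ MemTotal == \A\n\o\n... ]].
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"   # HugePages_* report a bare count, others "NNN kB"
        return 0
      done < "$mem_f"
      echo 0   # key absent: report zero, as the trace's "echo 0" does
    }

On this box get_meminfo_sketch AnonHugePages would print 0, matching the "echo 0" / "return 0" pair that closes each scan in the trace and feeds hugepages.sh@97's anon=0.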
00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.103 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44312264 kB' 'MemAvailable: 47819340 kB' 'Buffers: 2704 kB' 'Cached: 11740720 kB' 'SwapCached: 0 kB' 'Active: 8747636 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353228 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513812 kB' 'Mapped: 180604 kB' 'Shmem: 7842632 kB' 'KReclaimable: 199816 kB' 'Slab: 571140 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371324 kB' 'KernelStack: 12848 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9481244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.104 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 
11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 
11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.105 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.106 11:58:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44314608 kB' 'MemAvailable: 47821684 kB' 'Buffers: 2704 kB' 'Cached: 11740744 kB' 'SwapCached: 0 kB' 'Active: 8746684 kB' 'Inactive: 3506384 kB' 'Active(anon): 8352276 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512876 kB' 'Mapped: 180524 kB' 'Shmem: 7842656 kB' 'KReclaimable: 199816 kB' 'Slab: 571252 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371436 kB' 'KernelStack: 12880 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9482632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.106 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 
11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.107 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
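At this point the verify pass has already recorded anon=0 (hugepages.sh@97) and surp=0 (hugepages.sh@99), and the scan below continues through the same snapshot until HugePages_Rsvd matches; the printf above shows HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0 at a 2048 kB page size. A hedged sketch of the bookkeeping this step appears to perform, reusing get_meminfo_sketch from the sketch above (verify_sketch is a hypothetical name; the real logic is verify_nr_hugepages in setup/hugepages.sh, and the total-minus-surplus arithmetic is an assumption, not lifted from the script):

    # Sketch only: report the counters the trace echoes and check that the
    # persistent (non-surplus) hugepages cover what the test requested.
    verify_sketch() {
      local expected=$1 total surp resv
      total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
      surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
      resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
      echo "nr_hugepages=$((total - surp))"   # trace: nr_hugepages=1024
      echo "resv_hugepages=$resv"             # trace: resv_hugepages=0
      echo "surplus_hugepages=$surp"          # trace: surplus_hugepages=0
      (( total - surp == expected ))          # status drives the verify result
    }

Called as verify_sketch 1024 it would succeed here, consistent with the nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0 echoes that follow at hugepages.sh@102-104.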
00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.108 nr_hugepages=1024 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.108 resv_hugepages=0 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.108 surplus_hugepages=0 00:03:31.108 11:58:38 
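The scan that just returned is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches HugePages_Rsvd, echoing its value (0); hugepages.sh then records it as resv and prints the nr_hugepages/resv_hugepages/surplus_hugepages summary. A condensed, runnable sketch of that pattern (the helper name and the here-string loop are simplifications of the traced code, not the verbatim implementation):

#!/usr/bin/env bash
shopt -s extglob

# get_meminfo_sketch KEY [NODE] -- echo KEY's value from the global or
# per-node meminfo file, mirroring the scan loop visible in the trace.
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # each mismatch traces as one "continue"
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd   # prints 0 on this host, matching the trace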
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.108 anon_hugepages=0 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44311704 kB' 'MemAvailable: 47818780 kB' 'Buffers: 2704 kB' 'Cached: 11740764 kB' 'SwapCached: 0 kB' 'Active: 8748392 kB' 'Inactive: 3506384 kB' 'Active(anon): 8353984 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514580 kB' 'Mapped: 180524 kB' 'Shmem: 7842676 kB' 'KReclaimable: 199816 kB' 'Slab: 571252 kB' 'SReclaimable: 199816 kB' 'SUnreclaim: 371436 kB' 'KernelStack: 13152 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9484016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1863260 kB' 'DirectMap2M: 13785088 kB' 'DirectMap1G: 53477376 kB' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
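The snapshot just dumped is internally consistent, which is what the assertions around it verify: HugePages_Total 1024 at Hugepagesize 2048 kB gives 1024 * 2048 = 2097152 kB, matching the reported Hugetlb, and with HugePages_Rsvd and HugePages_Surp both 0 the guard (( 1024 == nr_hugepages + surp + resv )) holds. The same arithmetic in shell:

(( hugetlb_kb = 1024 * 2048 ))          # pages * Hugepagesize = 2097152 kB
(( 1024 == 1024 + 0 + 0 )) && echo OK   # nr_hugepages + surp + resv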
00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.108 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 
11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.109 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26113416 kB' 'MemUsed: 6716468 kB' 'SwapCached: 0 kB' 'Active: 3493496 kB' 'Inactive: 153096 kB' 'Active(anon): 3332844 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 153096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3306248 kB' 'Mapped: 90684 kB' 'AnonPages: 343484 kB' 'Shmem: 2992500 kB' 'KernelStack: 7976 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98044 kB' 'Slab: 324768 kB' 'SReclaimable: 98044 kB' 'SUnreclaim: 226724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.110 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 
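This pass calls get_meminfo with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo. Every line of a per-node file is prefixed with the node name, which is what the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips so that the same key scan works for both files. A minimal demonstration of that strip:

shopt -s extglob
line='Node 0 HugePages_Total:  1024'
echo "${line#Node +([0-9]) }"   # -> HugePages_Total:  1024

The node0 dump above also confirms all 1024 pages sit on this node: HugePages_Total and HugePages_Free are both 1024 and HugePages_Surp is 0.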
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.111 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:31.112 node0=1024 expecting 1024 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:31.112 00:03:31.112 real 0m2.761s 00:03:31.112 user 0m1.131s 00:03:31.112 sys 0m1.549s 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.112 11:58:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.112 ************************************ 00:03:31.112 END TEST no_shrink_alloc 00:03:31.112 ************************************ 00:03:31.112 11:58:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:31.112 11:58:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:31.112 00:03:31.112 real 0m11.430s 00:03:31.112 user 0m4.411s 00:03:31.112 sys 0m5.921s 00:03:31.112 11:58:38 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.112 11:58:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.112 ************************************ 00:03:31.112 END TEST hugepages 00:03:31.112 ************************************ 00:03:31.112 11:58:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:31.112 11:58:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:31.112 11:58:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.112 11:58:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.112 11:58:38 
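Before the driver suite begins, the clear_hp teardown traced above writes 0 through each hugepages-*/nr_hugepages knob on both NUMA nodes and exports CLEAR_HUGE=yes so the next suite starts with no reserved pages. A sketch of that teardown (root is required for the sysfs writes; the redirection target is inferred, since xtrace does not print redirections, and the node list is hard-coded where the real script iterates ${!nodes_sys[@]}):

clear_hp_sketch() {
    local node hp
    for node in 0 1; do   # the trace shows two nodes (no_nodes=2)
        for hp in /sys/devices/system/node/node"$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}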
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:31.112 ************************************ 00:03:31.112 START TEST driver 00:03:31.112 ************************************ 00:03:31.112 11:58:39 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:31.371 * Looking for test storage... 00:03:31.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.371 11:58:39 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:31.371 11:58:39 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.371 11:58:39 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.901 11:58:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:33.901 11:58:41 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.901 11:58:41 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.901 11:58:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:33.901 ************************************ 00:03:33.901 START TEST guess_driver 00:03:33.901 ************************************ 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:33.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:33.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:33.901 insmod
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:33.901 Looking for driver=vfio-pci 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.901 11:58:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 
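pick_driver has settled on vfio-pci: /sys/module/vfio/parameters/enable_unsafe_noiommu_mode exists (read back as N), the IOMMU-group glob matched 141 entries, and modprobe --show-depends vfio_pci resolved to real .ko.xz objects. A condensed sketch of that decision (the unsafe_vfio == Y fallback branch is an assumption about the untraced path; nullglob is added so an empty glob counts as zero groups):

shopt -s nullglob
pick_vfio_sketch() {
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local -a iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # is_driver: the module name resolves to actual kernel objects
        [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]] &&
            { echo vfio-pci; return 0; }
    fi
    return 1
}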
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.836 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.837 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.774 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.774 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.774 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.031 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:36.031 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:36.031 11:58:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.031 11:58:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.561 00:03:38.561 real 0m4.775s 00:03:38.561 user 0m1.092s 00:03:38.561 sys 0m1.788s 00:03:38.561 11:58:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 
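The guess_driver run above reduces to pick_driver's vfio check: vfio-pci wins when the host exposes populated IOMMU groups (141 of them here) and "modprobe --show-depends vfio_pci" resolves to an insmod chain of real .ko files. A minimal standalone sketch of that decision, reconstructed from the trace rather than copied from test/setup/driver.sh:

    #!/usr/bin/env bash
    # Hedged sketch of the vfio-pci probe traced above; not the real script.
    shopt -s nullglob    # an absent iommu_groups dir should give an empty array

    unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    # Populated IOMMU groups mean the kernel IOMMU is on, so vfio-pci is usable.
    iommu_groups=(/sys/kernel/iommu_groups/*)

    # --show-depends prints the insmod chain; a ".ko" token shows the module
    # and its dependencies actually exist for the running kernel.
    if { ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; } &&
        [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi

On this node the trace shows 141 IOMMU groups and a full insmod chain (irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, vfio-pci), so the test settles on vfio-pci and rebinds devices accordingly.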
00:03:38.561 11:58:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.561 ************************************ 00:03:38.561 END TEST guess_driver 00:03:38.561 ************************************ 00:03:38.561 11:58:46 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:38.561 00:03:38.561 real 0m7.223s 00:03:38.561 user 0m1.630s 00:03:38.561 sys 0m2.714s 00:03:38.561 11:58:46 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.561 11:58:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.561 ************************************ 00:03:38.561 END TEST driver 00:03:38.561 ************************************ 00:03:38.561 11:58:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:38.561 11:58:46 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.561 11:58:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.561 11:58:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.561 11:58:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.561 ************************************ 00:03:38.561 START TEST devices 00:03:38.561 ************************************ 00:03:38.561 11:58:46 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.561 * Looking for test storage... 00:03:38.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.561 11:58:46 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:38.561 11:58:46 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:38.561 11:58:46 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.561 11:58:46 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.934 11:58:47 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:39.934 11:58:47 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:39.934 11:58:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:39.934 11:58:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:39.934 11:58:47 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:39.934 No valid GPT data, bailing 00:03:39.934 11:58:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.193 11:58:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.193 11:58:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.193 11:58:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:40.193 11:58:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:40.193 11:58:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:40.193 11:58:47 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:40.193 11:58:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:40.193 11:58:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.193 11:58:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:40.193 11:58:47 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:40.193 11:58:47 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:40.193 11:58:47 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:40.193 11:58:47 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.193 11:58:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.193 11:58:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.193 ************************************ 00:03:40.193 START TEST nvme_mount 00:03:40.193 ************************************ 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:40.193 11:58:47 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.193 11:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:41.127 Creating new GPT entries in memory. 00:03:41.127 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.127 other utilities. 00:03:41.127 11:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.127 11:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.127 11:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.127 11:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.127 11:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:42.059 Creating new GPT entries in memory. 00:03:42.059 The operation has completed successfully. 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 845624 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:42.059 11:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.317 11:58:50 
setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.317 11:58:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.250 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.251 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.510 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.510 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.768 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:43.768 /dev/nvme0n1: 8 bytes were 
erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:43.768 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:43.768 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.768 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.142 11:58:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.075 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.076 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
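The wipefs calls in this cleanup (and the ones earlier in nvme_mount) are worth decoding: wipefs --all erases every signature it recognizes, and the hex in its output is exactly those signatures. 53 ef at offset 0x438 is the little-endian ext4 superblock magic 0xEF53; the eight bytes 45 46 49 20 50 41 52 54 spell "EFI PART", the GPT header signature found both at LBA 1 and in the backup header near the end of the disk; 55 aa at 0x1fe is the protective MBR's boot signature. A hedged sketch of the same sequence, with a hypothetical mount point standing in for the workspace path:

    # Sketch of cleanup_nvme as traced here; $mnt is a stand-in path.
    mnt=/tmp/nvme_mount
    dev=/dev/nvme0n1

    mountpoint -q "$mnt" && umount "$mnt"

    # Wipe the partition first, then the whole disk, so no stale ext4 or GPT
    # signature survives into the next test case.
    [[ -b ${dev}p1 ]] && wipefs --all "${dev}p1"
    [[ -b $dev ]] && wipefs --all "$dev"

Here the [[ -b /dev/nvme0n1p1 ]] guard is false (this run formatted the whole disk, so no partition node exists), which is why only the whole-disk wipe appears next.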
00:03:46.334 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.334 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.334 00:03:46.335 real 0m6.259s 00:03:46.335 user 0m1.417s 00:03:46.335 sys 0m2.437s 00:03:46.335 11:58:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.335 11:58:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:46.335 ************************************ 00:03:46.335 END TEST nvme_mount 00:03:46.335 ************************************ 00:03:46.335 11:58:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:46.335 11:58:54 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:46.335 11:58:54 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.335 11:58:54 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.335 11:58:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.335 ************************************ 00:03:46.335 START TEST dm_mount 00:03:46.335 ************************************ 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.335 11:58:54 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:47.708 Creating new GPT entries in memory. 00:03:47.708 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.708 other utilities. 
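The sgdisk loop that follows carves the two dm_mount partitions with the same arithmetic nvme_mount used for one: each partition is size=1073741824 bytes, divided by 512 to get 2097152 sectors, with partition 1 starting at sector 2048 and each later partition starting one sector past its predecessor's end. A sketch of those calls, assuming the same /dev/nvme0n1 (destructive if actually run):

    # Hedged sketch of the partitioning traced below; destructive on $disk.
    disk=/dev/nvme0n1
    size=1073741824            # 1 GiB per partition, in bytes
    ((size /= 512))            # sgdisk counts 512-byte sectors -> 2097152

    sgdisk "$disk" --zap-all   # destroy any existing GPT/MBR structures first

    part_start=0 part_end=0
    for part in 1 2; do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # flock serializes concurrent sgdisk writers on the same disk node.
        flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
    done
    # Yields --new=1:2048:2099199 and --new=2:2099200:4196351, as in the trace.

The sync_dev_uevents.sh wrapper seen in the trace then waits for the kernel's partition "add" uevents, which is why each sgdisk call is followed by a "The operation has completed successfully." line before the test proceeds.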
00:03:47.708 11:58:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.708 11:58:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.708 11:58:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.708 11:58:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.708 11:58:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:48.643 Creating new GPT entries in memory. 00:03:48.643 The operation has completed successfully. 00:03:48.643 11:58:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.643 11:58:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.643 11:58:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:48.643 11:58:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:48.643 11:58:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:49.579 The operation has completed successfully. 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 848008 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.579 11:58:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.954 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 
11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.884 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.885 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:52.143 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:52.143 00:03:52.143 real 0m5.695s 00:03:52.143 user 0m0.930s 00:03:52.143 sys 0m1.598s 00:03:52.143 11:58:59 
setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.143 11:58:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:52.143 ************************************ 00:03:52.143 END TEST dm_mount 00:03:52.143 ************************************ 00:03:52.143 11:58:59 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:52.143 11:58:59 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:52.143 11:58:59 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:52.143 11:58:59 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.143 11:58:59 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.143 11:58:59 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.143 11:58:59 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.143 11:58:59 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.400 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:52.400 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:52.400 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.400 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.400 11:59:00 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:52.400 11:59:00 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.400 11:59:00 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.400 11:59:00 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.400 11:59:00 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.400 11:59:00 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.400 11:59:00 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:52.400 00:03:52.400 real 0m13.920s 00:03:52.400 user 0m3.022s 00:03:52.400 sys 0m5.091s 00:03:52.400 11:59:00 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.400 11:59:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.400 ************************************ 00:03:52.400 END TEST devices 00:03:52.400 ************************************ 00:03:52.400 11:59:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.400 00:03:52.400 real 0m43.471s 00:03:52.400 user 0m12.387s 00:03:52.400 sys 0m19.268s 00:03:52.400 11:59:00 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.400 11:59:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.400 ************************************ 00:03:52.400 END TEST setup.sh 00:03:52.400 ************************************ 00:03:52.400 11:59:00 -- common/autotest_common.sh@1142 -- # return 0 00:03:52.400 11:59:00 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:53.772 Hugepages 00:03:53.772 node hugesize free / total 00:03:53.772 node0 1048576kB 0 / 0 00:03:53.772 node0 2048kB 2048 / 2048 00:03:53.772 node1 1048576kB 0 / 0 00:03:53.772 node1 2048kB 0 / 0 00:03:53.772 00:03:53.772 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.772 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:53.772 I/OAT 0000:00:04.1 
8086 0e21 0 ioatdma - - 00:03:53.772 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:53.772 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:53.772 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:53.772 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:53.772 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:53.772 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:53.772 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:53.772 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:53.772 11:59:01 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.772 11:59:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:53.772 11:59:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:53.772 11:59:01 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.706 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.706 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.706 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.706 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.706 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.706 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.706 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.706 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:54.706 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.964 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.964 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.964 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.964 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.964 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.964 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.964 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.899 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.899 11:59:03 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:56.835 11:59:04 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:56.835 11:59:04 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:56.835 11:59:04 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:56.835 11:59:04 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:56.835 11:59:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:56.835 11:59:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:56.835 11:59:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.835 11:59:04 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.835 11:59:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:57.094 11:59:04 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:57.094 11:59:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:57.094 11:59:04 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.042 Waiting for block devices as requested 00:03:58.042 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:58.301 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:58.301 
0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:58.301 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:58.558 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:58.558 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:58.558 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:58.558 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:58.558 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:58.815 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:58.815 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:58.815 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:59.072 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:59.072 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:59.072 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:59.072 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:59.329 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:59.329 11:59:07 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:59.329 11:59:07 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:59.329 11:59:07 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:59.329 11:59:07 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:59.329 11:59:07 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:59.329 11:59:07 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:59.329 11:59:07 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:59.329 11:59:07 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:59.329 11:59:07 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:59.329 11:59:07 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:59.329 11:59:07 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:59.329 11:59:07 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:59.329 11:59:07 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:59.329 11:59:07 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:59.329 11:59:07 -- common/autotest_common.sh@1557 -- # continue 00:03:59.329 11:59:07 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:59.329 11:59:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.329 11:59:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.329 11:59:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:59.329 11:59:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.329 11:59:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.329 11:59:07 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.700 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.700 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.700 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.700 0000:00:04.4 (8086 0e24): 
ioatdma -> vfio-pci 00:04:00.700 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.700 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.700 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.700 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.700 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.631 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.631 11:59:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:01.631 11:59:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.631 11:59:09 -- common/autotest_common.sh@10 -- # set +x 00:04:01.888 11:59:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:01.888 11:59:09 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:01.888 11:59:09 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.888 11:59:09 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:01.888 11:59:09 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:01.888 11:59:09 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:01.888 11:59:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:01.888 11:59:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:01.888 11:59:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.888 11:59:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:01.888 11:59:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:01.888 11:59:09 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:01.888 11:59:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:01.888 11:59:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:01.888 11:59:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:01.888 11:59:09 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:01.888 11:59:09 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:01.888 11:59:09 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:01.888 11:59:09 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:01.888 11:59:09 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:01.888 11:59:09 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=853187 00:04:01.888 11:59:09 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.888 11:59:09 -- common/autotest_common.sh@1598 -- # waitforlisten 853187 00:04:01.888 11:59:09 -- common/autotest_common.sh@829 -- # '[' -z 853187 ']' 00:04:01.888 11:59:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.888 11:59:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.888 11:59:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:01.889 11:59:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.889 11:59:09 -- common/autotest_common.sh@10 -- # set +x 00:04:01.889 [2024-07-22 11:59:09.715479] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:01.889 [2024-07-22 11:59:09.715581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853187 ] 00:04:01.889 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.889 [2024-07-22 11:59:09.747820] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:01.889 [2024-07-22 11:59:09.779504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.146 [2024-07-22 11:59:09.871457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.403 11:59:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:02.403 11:59:10 -- common/autotest_common.sh@862 -- # return 0 00:04:02.403 11:59:10 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:02.403 11:59:10 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:02.403 11:59:10 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:05.682 nvme0n1 00:04:05.682 11:59:13 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:05.682 [2024-07-22 11:59:13.441829] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:05.682 [2024-07-22 11:59:13.441879] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:05.682 request: 00:04:05.682 { 00:04:05.682 "nvme_ctrlr_name": "nvme0", 00:04:05.682 "password": "test", 00:04:05.682 "method": "bdev_nvme_opal_revert", 00:04:05.682 "req_id": 1 00:04:05.682 } 00:04:05.682 Got JSON-RPC error response 00:04:05.682 response: 00:04:05.682 { 00:04:05.682 "code": -32603, 00:04:05.682 "message": "Internal error" 00:04:05.682 } 00:04:05.682 11:59:13 -- common/autotest_common.sh@1604 -- # true 00:04:05.682 11:59:13 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:05.682 11:59:13 -- common/autotest_common.sh@1608 -- # killprocess 853187 00:04:05.682 11:59:13 -- common/autotest_common.sh@948 -- # '[' -z 853187 ']' 00:04:05.682 11:59:13 -- common/autotest_common.sh@952 -- # kill -0 853187 00:04:05.682 11:59:13 -- common/autotest_common.sh@953 -- # uname 00:04:05.682 11:59:13 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.682 11:59:13 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 853187 00:04:05.682 11:59:13 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.682 11:59:13 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.682 11:59:13 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 853187' 00:04:05.682 killing process with pid 853187 00:04:05.682 11:59:13 -- common/autotest_common.sh@967 -- # kill 853187 00:04:05.682 11:59:13 -- common/autotest_common.sh@972 -- # wait 853187 00:04:07.580 11:59:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:07.580 11:59:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:07.580 11:59:15 -- 
spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:07.580 11:59:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:07.580 11:59:15 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:07.580 11:59:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:07.580 11:59:15 -- common/autotest_common.sh@10 -- # set +x 00:04:07.580 11:59:15 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:07.580 11:59:15 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:07.580 11:59:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.580 11:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.580 11:59:15 -- common/autotest_common.sh@10 -- # set +x 00:04:07.580 ************************************ 00:04:07.580 START TEST env 00:04:07.580 ************************************ 00:04:07.580 11:59:15 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:07.580 * Looking for test storage... 00:04:07.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:07.580 11:59:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.580 11:59:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.580 11:59:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.580 11:59:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.580 ************************************ 00:04:07.580 START TEST env_memory 00:04:07.580 ************************************ 00:04:07.580 11:59:15 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.580 00:04:07.580 00:04:07.580 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.580 http://cunit.sourceforge.net/ 00:04:07.580 00:04:07.580 00:04:07.580 Suite: memory 00:04:07.580 Test: alloc and free memory map ...[2024-07-22 11:59:15.394338] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.580 passed 00:04:07.580 Test: mem map translation ...[2024-07-22 11:59:15.414752] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.580 [2024-07-22 11:59:15.414775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.580 [2024-07-22 11:59:15.414826] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.580 [2024-07-22 11:59:15.414843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.580 passed 00:04:07.580 Test: mem map registration ...[2024-07-22 11:59:15.455868] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:07.580 [2024-07-22 11:59:15.455888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 
len=2097152 00:04:07.580 passed 00:04:07.580 Test: mem map adjacent registrations ...passed 00:04:07.580 00:04:07.580 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.580 suites 1 1 n/a 0 0 00:04:07.580 tests 4 4 4 0 0 00:04:07.580 asserts 152 152 152 0 n/a 00:04:07.580 00:04:07.580 Elapsed time = 0.139 seconds 00:04:07.838 00:04:07.838 real 0m0.146s 00:04:07.838 user 0m0.136s 00:04:07.838 sys 0m0.009s 00:04:07.838 11:59:15 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.838 11:59:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:07.838 ************************************ 00:04:07.838 END TEST env_memory 00:04:07.838 ************************************ 00:04:07.838 11:59:15 env -- common/autotest_common.sh@1142 -- # return 0 00:04:07.838 11:59:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.838 11:59:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.838 11:59:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.838 11:59:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.838 ************************************ 00:04:07.838 START TEST env_vtophys 00:04:07.838 ************************************ 00:04:07.838 11:59:15 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.838 EAL: lib.eal log level changed from notice to debug 00:04:07.838 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.838 EAL: Detected lcore 1 as core 1 on socket 0 00:04:07.838 EAL: Detected lcore 2 as core 2 on socket 0 00:04:07.838 EAL: Detected lcore 3 as core 3 on socket 0 00:04:07.838 EAL: Detected lcore 4 as core 4 on socket 0 00:04:07.838 EAL: Detected lcore 5 as core 5 on socket 0 00:04:07.838 EAL: Detected lcore 6 as core 8 on socket 0 00:04:07.838 EAL: Detected lcore 7 as core 9 on socket 0 00:04:07.838 EAL: Detected lcore 8 as core 10 on socket 0 00:04:07.838 EAL: Detected lcore 9 as core 11 on socket 0 00:04:07.838 EAL: Detected lcore 10 as core 12 on socket 0 00:04:07.838 EAL: Detected lcore 11 as core 13 on socket 0 00:04:07.838 EAL: Detected lcore 12 as core 0 on socket 1 00:04:07.838 EAL: Detected lcore 13 as core 1 on socket 1 00:04:07.838 EAL: Detected lcore 14 as core 2 on socket 1 00:04:07.838 EAL: Detected lcore 15 as core 3 on socket 1 00:04:07.838 EAL: Detected lcore 16 as core 4 on socket 1 00:04:07.838 EAL: Detected lcore 17 as core 5 on socket 1 00:04:07.838 EAL: Detected lcore 18 as core 8 on socket 1 00:04:07.838 EAL: Detected lcore 19 as core 9 on socket 1 00:04:07.838 EAL: Detected lcore 20 as core 10 on socket 1 00:04:07.838 EAL: Detected lcore 21 as core 11 on socket 1 00:04:07.838 EAL: Detected lcore 22 as core 12 on socket 1 00:04:07.838 EAL: Detected lcore 23 as core 13 on socket 1 00:04:07.838 EAL: Detected lcore 24 as core 0 on socket 0 00:04:07.838 EAL: Detected lcore 25 as core 1 on socket 0 00:04:07.838 EAL: Detected lcore 26 as core 2 on socket 0 00:04:07.838 EAL: Detected lcore 27 as core 3 on socket 0 00:04:07.838 EAL: Detected lcore 28 as core 4 on socket 0 00:04:07.838 EAL: Detected lcore 29 as core 5 on socket 0 00:04:07.838 EAL: Detected lcore 30 as core 8 on socket 0 00:04:07.838 EAL: Detected lcore 31 as core 9 on socket 0 00:04:07.838 EAL: Detected lcore 32 as core 10 on socket 0 00:04:07.838 EAL: Detected lcore 33 as core 11 on socket 0 00:04:07.838 EAL: Detected lcore 34 as core 12 on socket 0 
00:04:07.838 EAL: Detected lcore 35 as core 13 on socket 0 00:04:07.838 EAL: Detected lcore 36 as core 0 on socket 1 00:04:07.838 EAL: Detected lcore 37 as core 1 on socket 1 00:04:07.838 EAL: Detected lcore 38 as core 2 on socket 1 00:04:07.838 EAL: Detected lcore 39 as core 3 on socket 1 00:04:07.838 EAL: Detected lcore 40 as core 4 on socket 1 00:04:07.838 EAL: Detected lcore 41 as core 5 on socket 1 00:04:07.838 EAL: Detected lcore 42 as core 8 on socket 1 00:04:07.838 EAL: Detected lcore 43 as core 9 on socket 1 00:04:07.838 EAL: Detected lcore 44 as core 10 on socket 1 00:04:07.838 EAL: Detected lcore 45 as core 11 on socket 1 00:04:07.838 EAL: Detected lcore 46 as core 12 on socket 1 00:04:07.838 EAL: Detected lcore 47 as core 13 on socket 1 00:04:07.838 EAL: Maximum logical cores by configuration: 128 00:04:07.838 EAL: Detected CPU lcores: 48 00:04:07.838 EAL: Detected NUMA nodes: 2 00:04:07.838 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:07.838 EAL: Detected shared linkage of DPDK 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:07.838 EAL: Registered [vdev] bus. 00:04:07.838 EAL: bus.vdev log level changed from disabled to notice 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:07.838 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:07.838 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:07.838 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:07.838 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.838 EAL: No shared files mode enabled, IPC is disabled 00:04:07.838 EAL: Bus pci wants IOVA as 'DC' 00:04:07.838 EAL: Bus vdev wants IOVA as 'DC' 00:04:07.838 EAL: Buses did not request a specific IOVA mode. 00:04:07.838 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:07.838 EAL: Selected IOVA mode 'VA' 00:04:07.838 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.838 EAL: Probing VFIO support... 00:04:07.838 EAL: IOMMU type 1 (Type 1) is supported 00:04:07.838 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:07.838 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:07.838 EAL: VFIO support initialized 00:04:07.838 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.838 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.838 EAL: Setting up physically contiguous memory... 
00:04:07.838 EAL: Setting maximum number of open files to 524288 00:04:07.838 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.838 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:07.838 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.838 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:07.838 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.838 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:07.838 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.838 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.838 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:07.838 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:07.838 EAL: Hugepages will be freed exactly as allocated. 00:04:07.838 EAL: No shared files mode enabled, IPC is disabled 00:04:07.838 EAL: No shared files mode enabled, IPC is disabled 00:04:07.838 EAL: TSC frequency is ~2700000 KHz 00:04:07.838 EAL: Main lcore 0 is ready (tid=7f71f4bc8a00;cpuset=[0]) 00:04:07.838 EAL: Trying to obtain current memory policy. 00:04:07.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.838 EAL: Restoring previous memory policy: 0 00:04:07.838 EAL: request: mp_malloc_sync 00:04:07.838 EAL: No shared files mode enabled, IPC is disabled 00:04:07.838 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.838 EAL: No shared files mode enabled, IPC is disabled 00:04:07.838 EAL: No shared files mode enabled, IPC is disabled 00:04:07.838 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.838 00:04:07.838 00:04:07.838 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.838 http://cunit.sourceforge.net/ 00:04:07.838 00:04:07.838 00:04:07.838 Suite: components_suite 00:04:07.838 Test: vtophys_malloc_test ...passed 00:04:07.838 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.838 EAL: Restoring previous memory policy: 4 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.839 EAL: Trying to obtain current memory policy. 00:04:07.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.839 EAL: Restoring previous memory policy: 4 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.839 EAL: Trying to obtain current memory policy. 00:04:07.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.839 EAL: Restoring previous memory policy: 4 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.839 EAL: Trying to obtain current memory policy. 
00:04:07.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.839 EAL: Restoring previous memory policy: 4 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.839 EAL: Trying to obtain current memory policy. 00:04:07.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.839 EAL: Restoring previous memory policy: 4 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.839 EAL: Trying to obtain current memory policy. 00:04:07.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.839 EAL: Restoring previous memory policy: 4 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.839 EAL: Trying to obtain current memory policy. 00:04:07.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.839 EAL: Restoring previous memory policy: 4 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.839 EAL: request: mp_malloc_sync 00:04:07.839 EAL: No shared files mode enabled, IPC is disabled 00:04:07.839 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.096 EAL: request: mp_malloc_sync 00:04:08.096 EAL: No shared files mode enabled, IPC is disabled 00:04:08.096 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.096 EAL: Trying to obtain current memory policy. 00:04:08.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.096 EAL: Restoring previous memory policy: 4 00:04:08.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.096 EAL: request: mp_malloc_sync 00:04:08.096 EAL: No shared files mode enabled, IPC is disabled 00:04:08.096 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.097 EAL: request: mp_malloc_sync 00:04:08.097 EAL: No shared files mode enabled, IPC is disabled 00:04:08.097 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.097 EAL: Trying to obtain current memory policy. 
00:04:08.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.356 EAL: Restoring previous memory policy: 4 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.356 EAL: request: mp_malloc_sync 00:04:08.356 EAL: No shared files mode enabled, IPC is disabled 00:04:08.356 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.614 EAL: request: mp_malloc_sync 00:04:08.614 EAL: No shared files mode enabled, IPC is disabled 00:04:08.614 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.614 EAL: Trying to obtain current memory policy. 00:04:08.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.910 EAL: Restoring previous memory policy: 4 00:04:08.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.910 EAL: request: mp_malloc_sync 00:04:08.910 EAL: No shared files mode enabled, IPC is disabled 00:04:08.910 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.910 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.167 EAL: request: mp_malloc_sync 00:04:09.167 EAL: No shared files mode enabled, IPC is disabled 00:04:09.167 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.167 passed 00:04:09.167 00:04:09.167 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.167 suites 1 1 n/a 0 0 00:04:09.167 tests 2 2 2 0 0 00:04:09.167 asserts 497 497 497 0 n/a 00:04:09.167 00:04:09.167 Elapsed time = 1.364 seconds 00:04:09.167 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.167 EAL: request: mp_malloc_sync 00:04:09.167 EAL: No shared files mode enabled, IPC is disabled 00:04:09.167 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.167 EAL: No shared files mode enabled, IPC is disabled 00:04:09.167 EAL: No shared files mode enabled, IPC is disabled 00:04:09.167 EAL: No shared files mode enabled, IPC is disabled 00:04:09.167 00:04:09.167 real 0m1.482s 00:04:09.167 user 0m0.842s 00:04:09.167 sys 0m0.604s 00:04:09.167 11:59:17 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.167 11:59:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:09.167 ************************************ 00:04:09.167 END TEST env_vtophys 00:04:09.167 ************************************ 00:04:09.167 11:59:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:09.167 11:59:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:09.167 11:59:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.167 11:59:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.167 11:59:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.167 ************************************ 00:04:09.167 START TEST env_pci 00:04:09.167 ************************************ 00:04:09.167 11:59:17 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:09.167 00:04:09.167 00:04:09.167 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.167 http://cunit.sourceforge.net/ 00:04:09.167 00:04:09.167 00:04:09.167 Suite: pci 00:04:09.167 Test: pci_hook ...[2024-07-22 11:59:17.088710] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 854082 has claimed it 00:04:09.425 EAL: Cannot find device (10000:00:01.0) 00:04:09.425 EAL: Failed to attach device on primary process 00:04:09.425 passed 00:04:09.425 
00:04:09.425 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.425 suites 1 1 n/a 0 0 00:04:09.425 tests 1 1 1 0 0 00:04:09.425 asserts 25 25 25 0 n/a 00:04:09.425 00:04:09.425 Elapsed time = 0.021 seconds 00:04:09.425 00:04:09.425 real 0m0.033s 00:04:09.425 user 0m0.013s 00:04:09.425 sys 0m0.021s 00:04:09.425 11:59:17 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.425 11:59:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:09.425 ************************************ 00:04:09.425 END TEST env_pci 00:04:09.425 ************************************ 00:04:09.425 11:59:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:09.425 11:59:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.425 11:59:17 env -- env/env.sh@15 -- # uname 00:04:09.425 11:59:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.425 11:59:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.425 11:59:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.425 11:59:17 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:09.425 11:59:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.425 11:59:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.425 ************************************ 00:04:09.425 START TEST env_dpdk_post_init 00:04:09.425 ************************************ 00:04:09.425 11:59:17 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.425 EAL: Detected CPU lcores: 48 00:04:09.425 EAL: Detected NUMA nodes: 2 00:04:09.425 EAL: Detected shared linkage of DPDK 00:04:09.425 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.425 EAL: Selected IOVA mode 'VA' 00:04:09.425 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.425 EAL: VFIO support initialized 00:04:09.425 EAL: Using IOMMU type 1 (Type 1) 00:04:14.688 Starting DPDK initialization... 00:04:14.688 Starting SPDK post initialization... 00:04:14.688 SPDK NVMe probe 00:04:14.688 Attaching to 0000:88:00.0 00:04:14.688 Attached to 0000:88:00.0 00:04:14.688 Cleaning up... 
00:04:14.688 00:04:14.688 real 0m4.397s 00:04:14.688 user 0m3.270s 00:04:14.688 sys 0m0.186s 00:04:14.688 11:59:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.688 11:59:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.688 ************************************ 00:04:14.688 END TEST env_dpdk_post_init 00:04:14.688 ************************************ 00:04:14.688 11:59:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.688 11:59:21 env -- env/env.sh@26 -- # uname 00:04:14.688 11:59:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.688 11:59:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.688 11:59:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.688 11:59:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.688 11:59:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.688 ************************************ 00:04:14.688 START TEST env_mem_callbacks 00:04:14.688 ************************************ 00:04:14.688 11:59:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.688 EAL: Detected CPU lcores: 48 00:04:14.688 EAL: Detected NUMA nodes: 2 00:04:14.688 EAL: Detected shared linkage of DPDK 00:04:14.688 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.688 EAL: Selected IOVA mode 'VA' 00:04:14.688 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.688 EAL: VFIO support initialized 00:04:14.688 00:04:14.688 00:04:14.688 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.688 http://cunit.sourceforge.net/ 00:04:14.688 00:04:14.688 00:04:14.688 Suite: memory 00:04:14.688 Test: test ... 
00:04:14.688 register 0x200000200000 2097152 00:04:14.688 malloc 3145728 00:04:14.688 register 0x200000400000 4194304 00:04:14.688 buf 0x200000500000 len 3145728 PASSED 00:04:14.688 malloc 64 00:04:14.688 buf 0x2000004fff40 len 64 PASSED 00:04:14.688 malloc 4194304 00:04:14.688 register 0x200000800000 6291456 00:04:14.688 buf 0x200000a00000 len 4194304 PASSED 00:04:14.688 free 0x200000500000 3145728 00:04:14.688 free 0x2000004fff40 64 00:04:14.688 unregister 0x200000400000 4194304 PASSED 00:04:14.688 free 0x200000a00000 4194304 00:04:14.688 unregister 0x200000800000 6291456 PASSED 00:04:14.688 malloc 8388608 00:04:14.688 register 0x200000400000 10485760 00:04:14.688 buf 0x200000600000 len 8388608 PASSED 00:04:14.688 free 0x200000600000 8388608 00:04:14.688 unregister 0x200000400000 10485760 PASSED 00:04:14.688 passed 00:04:14.688 00:04:14.688 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.688 suites 1 1 n/a 0 0 00:04:14.688 tests 1 1 1 0 0 00:04:14.688 asserts 15 15 15 0 n/a 00:04:14.688 00:04:14.689 Elapsed time = 0.005 seconds 00:04:14.689 00:04:14.689 real 0m0.046s 00:04:14.689 user 0m0.006s 00:04:14.689 sys 0m0.040s 00:04:14.689 11:59:21 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.689 11:59:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 ************************************ 00:04:14.689 END TEST env_mem_callbacks 00:04:14.689 ************************************ 00:04:14.689 11:59:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.689 00:04:14.689 real 0m6.386s 00:04:14.689 user 0m4.368s 00:04:14.689 sys 0m1.059s 00:04:14.689 11:59:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.689 11:59:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 ************************************ 00:04:14.689 END TEST env 00:04:14.689 ************************************ 00:04:14.689 11:59:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.689 11:59:21 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.689 11:59:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.689 11:59:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.689 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 ************************************ 00:04:14.689 START TEST rpc 00:04:14.689 ************************************ 00:04:14.689 11:59:21 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.689 * Looking for test storage... 00:04:14.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.689 11:59:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=854744 00:04:14.689 11:59:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:14.689 11:59:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.689 11:59:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 854744 00:04:14.689 11:59:21 rpc -- common/autotest_common.sh@829 -- # '[' -z 854744 ']' 00:04:14.689 11:59:21 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.689 11:59:21 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.689 11:59:21 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:14.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.689 11:59:21 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.689 11:59:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 [2024-07-22 11:59:21.824183] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:14.689 [2024-07-22 11:59:21.824263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854744 ] 00:04:14.689 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.689 [2024-07-22 11:59:21.857302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:14.689 [2024-07-22 11:59:21.884061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.689 [2024-07-22 11:59:21.968382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.689 [2024-07-22 11:59:21.968451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 854744' to capture a snapshot of events at runtime. 00:04:14.689 [2024-07-22 11:59:21.968475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.689 [2024-07-22 11:59:21.968485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.689 [2024-07-22 11:59:21.968495] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid854744 for offline analysis/debug. 00:04:14.689 [2024-07-22 11:59:21.968522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.689 11:59:22 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.689 11:59:22 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:14.689 11:59:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.689 11:59:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.689 11:59:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.689 11:59:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.689 11:59:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.689 11:59:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.689 11:59:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 ************************************ 00:04:14.689 START TEST rpc_integrity 00:04:14.689 ************************************ 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.689 11:59:22 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.689 { 00:04:14.689 "name": "Malloc0", 00:04:14.689 "aliases": [ 00:04:14.689 "273f908a-2299-4b9d-8ad9-50067ad6d402" 00:04:14.689 ], 00:04:14.689 "product_name": "Malloc disk", 00:04:14.689 "block_size": 512, 00:04:14.689 "num_blocks": 16384, 00:04:14.689 "uuid": "273f908a-2299-4b9d-8ad9-50067ad6d402", 00:04:14.689 "assigned_rate_limits": { 00:04:14.689 "rw_ios_per_sec": 0, 00:04:14.689 "rw_mbytes_per_sec": 0, 00:04:14.689 "r_mbytes_per_sec": 0, 00:04:14.689 "w_mbytes_per_sec": 0 00:04:14.689 }, 00:04:14.689 "claimed": false, 00:04:14.689 "zoned": false, 00:04:14.689 "supported_io_types": { 00:04:14.689 "read": true, 00:04:14.689 "write": true, 00:04:14.689 "unmap": true, 00:04:14.689 "flush": true, 00:04:14.689 "reset": true, 00:04:14.689 "nvme_admin": false, 00:04:14.689 "nvme_io": false, 00:04:14.689 "nvme_io_md": false, 00:04:14.689 "write_zeroes": true, 00:04:14.689 "zcopy": true, 00:04:14.689 "get_zone_info": false, 00:04:14.689 "zone_management": false, 00:04:14.689 "zone_append": false, 00:04:14.689 "compare": false, 00:04:14.689 "compare_and_write": false, 00:04:14.689 "abort": true, 00:04:14.689 "seek_hole": false, 00:04:14.689 "seek_data": false, 00:04:14.689 "copy": true, 00:04:14.689 "nvme_iov_md": false 00:04:14.689 }, 00:04:14.689 "memory_domains": [ 00:04:14.689 { 00:04:14.689 "dma_device_id": "system", 00:04:14.689 "dma_device_type": 1 00:04:14.689 }, 00:04:14.689 { 00:04:14.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.689 "dma_device_type": 2 00:04:14.689 } 00:04:14.689 ], 00:04:14.689 "driver_specific": {} 00:04:14.689 } 00:04:14.689 ]' 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.689 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.689 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.689 [2024-07-22 11:59:22.350236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.689 [2024-07-22 11:59:22.350278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.689 [2024-07-22 11:59:22.350312] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22127f0 00:04:14.689 [2024-07-22 11:59:22.350329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.689 [2024-07-22 11:59:22.351825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.690 [2024-07-22 11:59:22.351851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.690 Passthru0 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.690 { 00:04:14.690 "name": "Malloc0", 00:04:14.690 "aliases": [ 00:04:14.690 "273f908a-2299-4b9d-8ad9-50067ad6d402" 00:04:14.690 ], 00:04:14.690 "product_name": "Malloc disk", 00:04:14.690 "block_size": 512, 00:04:14.690 "num_blocks": 16384, 00:04:14.690 "uuid": "273f908a-2299-4b9d-8ad9-50067ad6d402", 00:04:14.690 "assigned_rate_limits": { 00:04:14.690 "rw_ios_per_sec": 0, 00:04:14.690 "rw_mbytes_per_sec": 0, 00:04:14.690 "r_mbytes_per_sec": 0, 00:04:14.690 "w_mbytes_per_sec": 0 00:04:14.690 }, 00:04:14.690 "claimed": true, 00:04:14.690 "claim_type": "exclusive_write", 00:04:14.690 "zoned": false, 00:04:14.690 "supported_io_types": { 00:04:14.690 "read": true, 00:04:14.690 "write": true, 00:04:14.690 "unmap": true, 00:04:14.690 "flush": true, 00:04:14.690 "reset": true, 00:04:14.690 "nvme_admin": false, 00:04:14.690 "nvme_io": false, 00:04:14.690 "nvme_io_md": false, 00:04:14.690 "write_zeroes": true, 00:04:14.690 "zcopy": true, 00:04:14.690 "get_zone_info": false, 00:04:14.690 "zone_management": false, 00:04:14.690 "zone_append": false, 00:04:14.690 "compare": false, 00:04:14.690 "compare_and_write": false, 00:04:14.690 "abort": true, 00:04:14.690 "seek_hole": false, 00:04:14.690 "seek_data": false, 00:04:14.690 "copy": true, 00:04:14.690 "nvme_iov_md": false 00:04:14.690 }, 00:04:14.690 "memory_domains": [ 00:04:14.690 { 00:04:14.690 "dma_device_id": "system", 00:04:14.690 "dma_device_type": 1 00:04:14.690 }, 00:04:14.690 { 00:04:14.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.690 "dma_device_type": 2 00:04:14.690 } 00:04:14.690 ], 00:04:14.690 "driver_specific": {} 00:04:14.690 }, 00:04:14.690 { 00:04:14.690 "name": "Passthru0", 00:04:14.690 "aliases": [ 00:04:14.690 "e9f19478-466c-5f29-98d0-a1a42886984c" 00:04:14.690 ], 00:04:14.690 "product_name": "passthru", 00:04:14.690 "block_size": 512, 00:04:14.690 "num_blocks": 16384, 00:04:14.690 "uuid": "e9f19478-466c-5f29-98d0-a1a42886984c", 00:04:14.690 "assigned_rate_limits": { 00:04:14.690 "rw_ios_per_sec": 0, 00:04:14.690 "rw_mbytes_per_sec": 0, 00:04:14.690 "r_mbytes_per_sec": 0, 00:04:14.690 "w_mbytes_per_sec": 0 00:04:14.690 }, 00:04:14.690 "claimed": false, 00:04:14.690 "zoned": false, 00:04:14.690 "supported_io_types": { 00:04:14.690 "read": true, 00:04:14.690 "write": true, 00:04:14.690 "unmap": true, 00:04:14.690 "flush": true, 00:04:14.690 "reset": true, 00:04:14.690 "nvme_admin": false, 00:04:14.690 "nvme_io": false, 00:04:14.690 "nvme_io_md": false, 00:04:14.690 "write_zeroes": true, 00:04:14.690 "zcopy": true, 00:04:14.690 "get_zone_info": false, 
00:04:14.690 "zone_management": false, 00:04:14.690 "zone_append": false, 00:04:14.690 "compare": false, 00:04:14.690 "compare_and_write": false, 00:04:14.690 "abort": true, 00:04:14.690 "seek_hole": false, 00:04:14.690 "seek_data": false, 00:04:14.690 "copy": true, 00:04:14.690 "nvme_iov_md": false 00:04:14.690 }, 00:04:14.690 "memory_domains": [ 00:04:14.690 { 00:04:14.690 "dma_device_id": "system", 00:04:14.690 "dma_device_type": 1 00:04:14.690 }, 00:04:14.690 { 00:04:14.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.690 "dma_device_type": 2 00:04:14.690 } 00:04:14.690 ], 00:04:14.690 "driver_specific": { 00:04:14.690 "passthru": { 00:04:14.690 "name": "Passthru0", 00:04:14.690 "base_bdev_name": "Malloc0" 00:04:14.690 } 00:04:14.690 } 00:04:14.690 } 00:04:14.690 ]' 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.690 11:59:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.690 00:04:14.690 real 0m0.226s 00:04:14.690 user 0m0.154s 00:04:14.690 sys 0m0.017s 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.690 11:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 ************************************ 00:04:14.690 END TEST rpc_integrity 00:04:14.690 ************************************ 00:04:14.690 11:59:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.690 11:59:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.690 11:59:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.690 11:59:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.690 11:59:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 ************************************ 00:04:14.690 START TEST rpc_plugins 00:04:14.690 ************************************ 00:04:14.690 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:14.690 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.690 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.690 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 
11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.690 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.690 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.690 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.690 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.690 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.690 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.690 { 00:04:14.690 "name": "Malloc1", 00:04:14.690 "aliases": [ 00:04:14.690 "b5375bc5-5429-499f-9441-5fd5385bfea0" 00:04:14.690 ], 00:04:14.690 "product_name": "Malloc disk", 00:04:14.690 "block_size": 4096, 00:04:14.690 "num_blocks": 256, 00:04:14.690 "uuid": "b5375bc5-5429-499f-9441-5fd5385bfea0", 00:04:14.690 "assigned_rate_limits": { 00:04:14.690 "rw_ios_per_sec": 0, 00:04:14.690 "rw_mbytes_per_sec": 0, 00:04:14.690 "r_mbytes_per_sec": 0, 00:04:14.690 "w_mbytes_per_sec": 0 00:04:14.690 }, 00:04:14.690 "claimed": false, 00:04:14.690 "zoned": false, 00:04:14.690 "supported_io_types": { 00:04:14.690 "read": true, 00:04:14.690 "write": true, 00:04:14.690 "unmap": true, 00:04:14.690 "flush": true, 00:04:14.690 "reset": true, 00:04:14.690 "nvme_admin": false, 00:04:14.690 "nvme_io": false, 00:04:14.690 "nvme_io_md": false, 00:04:14.690 "write_zeroes": true, 00:04:14.690 "zcopy": true, 00:04:14.690 "get_zone_info": false, 00:04:14.690 "zone_management": false, 00:04:14.690 "zone_append": false, 00:04:14.690 "compare": false, 00:04:14.690 "compare_and_write": false, 00:04:14.690 "abort": true, 00:04:14.690 "seek_hole": false, 00:04:14.690 "seek_data": false, 00:04:14.690 "copy": true, 00:04:14.690 "nvme_iov_md": false 00:04:14.691 }, 00:04:14.691 "memory_domains": [ 00:04:14.691 { 00:04:14.691 "dma_device_id": "system", 00:04:14.691 "dma_device_type": 1 00:04:14.691 }, 00:04:14.691 { 00:04:14.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.691 "dma_device_type": 2 00:04:14.691 } 00:04:14.691 ], 00:04:14.691 "driver_specific": {} 00:04:14.691 } 00:04:14.691 ]' 00:04:14.691 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.691 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.691 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.691 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.691 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.691 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.691 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.691 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.691 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.691 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.691 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.691 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.949 11:59:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.949 00:04:14.949 real 0m0.116s 00:04:14.949 user 0m0.073s 00:04:14.949 sys 0m0.012s 00:04:14.949 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.949 11:59:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.949 
************************************ 00:04:14.949 END TEST rpc_plugins 00:04:14.949 ************************************ 00:04:14.949 11:59:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.949 11:59:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.949 11:59:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.949 11:59:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.949 11:59:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.949 ************************************ 00:04:14.949 START TEST rpc_trace_cmd_test 00:04:14.949 ************************************ 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.949 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid854744", 00:04:14.949 "tpoint_group_mask": "0x8", 00:04:14.949 "iscsi_conn": { 00:04:14.949 "mask": "0x2", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "scsi": { 00:04:14.949 "mask": "0x4", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "bdev": { 00:04:14.949 "mask": "0x8", 00:04:14.949 "tpoint_mask": "0xffffffffffffffff" 00:04:14.949 }, 00:04:14.949 "nvmf_rdma": { 00:04:14.949 "mask": "0x10", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "nvmf_tcp": { 00:04:14.949 "mask": "0x20", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "ftl": { 00:04:14.949 "mask": "0x40", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "blobfs": { 00:04:14.949 "mask": "0x80", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "dsa": { 00:04:14.949 "mask": "0x200", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "thread": { 00:04:14.949 "mask": "0x400", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "nvme_pcie": { 00:04:14.949 "mask": "0x800", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "iaa": { 00:04:14.949 "mask": "0x1000", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "nvme_tcp": { 00:04:14.949 "mask": "0x2000", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "bdev_nvme": { 00:04:14.949 "mask": "0x4000", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 }, 00:04:14.949 "sock": { 00:04:14.949 "mask": "0x8000", 00:04:14.949 "tpoint_mask": "0x0" 00:04:14.949 } 00:04:14.949 }' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:14.949 00:04:14.949 real 0m0.195s 00:04:14.949 user 0m0.173s 00:04:14.949 sys 0m0.016s 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.949 11:59:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.949 ************************************ 00:04:14.949 END TEST rpc_trace_cmd_test 00:04:14.949 ************************************ 00:04:15.209 11:59:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.209 11:59:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:15.209 11:59:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.209 11:59:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.209 11:59:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.209 11:59:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.209 11:59:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.209 ************************************ 00:04:15.209 START TEST rpc_daemon_integrity 00:04:15.209 ************************************ 00:04:15.209 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:15.209 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.209 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.209 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.209 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.209 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.210 { 00:04:15.210 "name": "Malloc2", 00:04:15.210 "aliases": [ 00:04:15.210 "951209e2-9577-48e4-abeb-7c2b1dd326f3" 00:04:15.210 ], 00:04:15.210 "product_name": "Malloc disk", 00:04:15.210 "block_size": 512, 00:04:15.210 "num_blocks": 16384, 00:04:15.210 "uuid": "951209e2-9577-48e4-abeb-7c2b1dd326f3", 00:04:15.210 "assigned_rate_limits": { 00:04:15.210 "rw_ios_per_sec": 0, 00:04:15.210 "rw_mbytes_per_sec": 0, 00:04:15.210 "r_mbytes_per_sec": 0, 00:04:15.210 "w_mbytes_per_sec": 0 00:04:15.210 }, 00:04:15.210 "claimed": false, 00:04:15.210 "zoned": false, 
00:04:15.210 "supported_io_types": { 00:04:15.210 "read": true, 00:04:15.210 "write": true, 00:04:15.210 "unmap": true, 00:04:15.210 "flush": true, 00:04:15.210 "reset": true, 00:04:15.210 "nvme_admin": false, 00:04:15.210 "nvme_io": false, 00:04:15.210 "nvme_io_md": false, 00:04:15.210 "write_zeroes": true, 00:04:15.210 "zcopy": true, 00:04:15.210 "get_zone_info": false, 00:04:15.210 "zone_management": false, 00:04:15.210 "zone_append": false, 00:04:15.210 "compare": false, 00:04:15.210 "compare_and_write": false, 00:04:15.210 "abort": true, 00:04:15.210 "seek_hole": false, 00:04:15.210 "seek_data": false, 00:04:15.210 "copy": true, 00:04:15.210 "nvme_iov_md": false 00:04:15.210 }, 00:04:15.210 "memory_domains": [ 00:04:15.210 { 00:04:15.210 "dma_device_id": "system", 00:04:15.210 "dma_device_type": 1 00:04:15.210 }, 00:04:15.210 { 00:04:15.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.210 "dma_device_type": 2 00:04:15.210 } 00:04:15.210 ], 00:04:15.210 "driver_specific": {} 00:04:15.210 } 00:04:15.210 ]' 00:04:15.210 11:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.210 [2024-07-22 11:59:23.032212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.210 [2024-07-22 11:59:23.032255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.210 [2024-07-22 11:59:23.032279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b6490 00:04:15.210 [2024-07-22 11:59:23.032295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.210 [2024-07-22 11:59:23.033609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.210 [2024-07-22 11:59:23.033647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.210 Passthru0 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.210 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.210 { 00:04:15.210 "name": "Malloc2", 00:04:15.210 "aliases": [ 00:04:15.210 "951209e2-9577-48e4-abeb-7c2b1dd326f3" 00:04:15.210 ], 00:04:15.210 "product_name": "Malloc disk", 00:04:15.210 "block_size": 512, 00:04:15.210 "num_blocks": 16384, 00:04:15.210 "uuid": "951209e2-9577-48e4-abeb-7c2b1dd326f3", 00:04:15.210 "assigned_rate_limits": { 00:04:15.210 "rw_ios_per_sec": 0, 00:04:15.210 "rw_mbytes_per_sec": 0, 00:04:15.210 "r_mbytes_per_sec": 0, 00:04:15.210 "w_mbytes_per_sec": 0 00:04:15.210 }, 00:04:15.210 "claimed": true, 00:04:15.210 "claim_type": "exclusive_write", 00:04:15.210 "zoned": false, 00:04:15.210 "supported_io_types": { 00:04:15.210 "read": true, 00:04:15.210 "write": true, 00:04:15.210 "unmap": true, 
00:04:15.210 "flush": true, 00:04:15.210 "reset": true, 00:04:15.210 "nvme_admin": false, 00:04:15.210 "nvme_io": false, 00:04:15.210 "nvme_io_md": false, 00:04:15.210 "write_zeroes": true, 00:04:15.210 "zcopy": true, 00:04:15.210 "get_zone_info": false, 00:04:15.210 "zone_management": false, 00:04:15.210 "zone_append": false, 00:04:15.210 "compare": false, 00:04:15.210 "compare_and_write": false, 00:04:15.210 "abort": true, 00:04:15.210 "seek_hole": false, 00:04:15.210 "seek_data": false, 00:04:15.210 "copy": true, 00:04:15.210 "nvme_iov_md": false 00:04:15.210 }, 00:04:15.210 "memory_domains": [ 00:04:15.210 { 00:04:15.210 "dma_device_id": "system", 00:04:15.210 "dma_device_type": 1 00:04:15.210 }, 00:04:15.210 { 00:04:15.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.210 "dma_device_type": 2 00:04:15.210 } 00:04:15.210 ], 00:04:15.210 "driver_specific": {} 00:04:15.210 }, 00:04:15.210 { 00:04:15.210 "name": "Passthru0", 00:04:15.210 "aliases": [ 00:04:15.210 "923a5f19-4af7-542b-b4fc-da593e2aef87" 00:04:15.210 ], 00:04:15.210 "product_name": "passthru", 00:04:15.210 "block_size": 512, 00:04:15.210 "num_blocks": 16384, 00:04:15.210 "uuid": "923a5f19-4af7-542b-b4fc-da593e2aef87", 00:04:15.210 "assigned_rate_limits": { 00:04:15.210 "rw_ios_per_sec": 0, 00:04:15.210 "rw_mbytes_per_sec": 0, 00:04:15.211 "r_mbytes_per_sec": 0, 00:04:15.211 "w_mbytes_per_sec": 0 00:04:15.211 }, 00:04:15.211 "claimed": false, 00:04:15.211 "zoned": false, 00:04:15.211 "supported_io_types": { 00:04:15.211 "read": true, 00:04:15.211 "write": true, 00:04:15.211 "unmap": true, 00:04:15.211 "flush": true, 00:04:15.211 "reset": true, 00:04:15.211 "nvme_admin": false, 00:04:15.211 "nvme_io": false, 00:04:15.211 "nvme_io_md": false, 00:04:15.211 "write_zeroes": true, 00:04:15.211 "zcopy": true, 00:04:15.211 "get_zone_info": false, 00:04:15.211 "zone_management": false, 00:04:15.211 "zone_append": false, 00:04:15.211 "compare": false, 00:04:15.211 "compare_and_write": false, 00:04:15.211 "abort": true, 00:04:15.211 "seek_hole": false, 00:04:15.211 "seek_data": false, 00:04:15.211 "copy": true, 00:04:15.211 "nvme_iov_md": false 00:04:15.211 }, 00:04:15.211 "memory_domains": [ 00:04:15.211 { 00:04:15.211 "dma_device_id": "system", 00:04:15.211 "dma_device_type": 1 00:04:15.211 }, 00:04:15.211 { 00:04:15.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.211 "dma_device_type": 2 00:04:15.211 } 00:04:15.211 ], 00:04:15.211 "driver_specific": { 00:04:15.211 "passthru": { 00:04:15.211 "name": "Passthru0", 00:04:15.211 "base_bdev_name": "Malloc2" 00:04:15.211 } 00:04:15.211 } 00:04:15.211 } 00:04:15.211 ]' 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.211 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.469 11:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.469 00:04:15.469 real 0m0.232s 00:04:15.469 user 0m0.153s 00:04:15.469 sys 0m0.020s 00:04:15.469 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.469 11:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.469 ************************************ 00:04:15.469 END TEST rpc_daemon_integrity 00:04:15.469 ************************************ 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.469 11:59:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.469 11:59:23 rpc -- rpc/rpc.sh@84 -- # killprocess 854744 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 854744 ']' 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@952 -- # kill -0 854744 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@953 -- # uname 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 854744 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 854744' 00:04:15.469 killing process with pid 854744 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@967 -- # kill 854744 00:04:15.469 11:59:23 rpc -- common/autotest_common.sh@972 -- # wait 854744 00:04:15.727 00:04:15.727 real 0m1.872s 00:04:15.727 user 0m2.366s 00:04:15.727 sys 0m0.587s 00:04:15.727 11:59:23 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.727 11:59:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.727 ************************************ 00:04:15.727 END TEST rpc 00:04:15.727 ************************************ 00:04:15.727 11:59:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:15.727 11:59:23 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.727 11:59:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.727 11:59:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.727 11:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:15.727 ************************************ 00:04:15.727 START TEST skip_rpc 00:04:15.727 ************************************ 00:04:15.727 11:59:23 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:16.011 * Looking for test storage... 
00:04:16.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.011 11:59:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.011 11:59:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.011 11:59:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.011 11:59:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.011 11:59:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.011 11:59:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.011 ************************************ 00:04:16.011 START TEST skip_rpc 00:04:16.011 ************************************ 00:04:16.011 11:59:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:16.011 11:59:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=855170 00:04:16.011 11:59:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.011 11:59:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.011 11:59:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.011 [2024-07-22 11:59:23.775905] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:16.011 [2024-07-22 11:59:23.775966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855170 ] 00:04:16.011 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.011 [2024-07-22 11:59:23.805810] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:16.011 [2024-07-22 11:59:23.835683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.269 [2024-07-22 11:59:23.926932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 855170 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 855170 ']' 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 855170 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855170 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855170' 00:04:21.547 killing process with pid 855170 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 855170 00:04:21.547 11:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 855170 00:04:21.547 00:04:21.547 real 0m5.419s 00:04:21.547 user 0m5.107s 00:04:21.547 sys 0m0.316s 00:04:21.547 11:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.547 11:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.547 ************************************ 00:04:21.547 END TEST skip_rpc 00:04:21.548 ************************************ 00:04:21.548 11:59:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.548 11:59:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.548 11:59:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.548 11:59:29 
skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.548 11:59:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.548 ************************************ 00:04:21.548 START TEST skip_rpc_with_json 00:04:21.548 ************************************ 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=855867 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 855867 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 855867 ']' 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.548 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.548 [2024-07-22 11:59:29.246587] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:21.548 [2024-07-22 11:59:29.246692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855867 ] 00:04:21.548 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.548 [2024-07-22 11:59:29.277911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:21.548 [2024-07-22 11:59:29.309626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.548 [2024-07-22 11:59:29.398506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.806 [2024-07-22 11:59:29.661099] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.806 request: 00:04:21.806 { 00:04:21.806 "trtype": "tcp", 00:04:21.806 "method": "nvmf_get_transports", 00:04:21.806 "req_id": 1 00:04:21.806 } 00:04:21.806 Got JSON-RPC error response 00:04:21.806 response: 00:04:21.806 { 00:04:21.806 "code": -19, 00:04:21.806 "message": "No such device" 00:04:21.806 } 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.806 [2024-07-22 11:59:29.669222] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.806 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.064 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.064 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.064 { 00:04:22.064 "subsystems": [ 00:04:22.064 { 00:04:22.064 "subsystem": "vfio_user_target", 00:04:22.064 "config": null 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "keyring", 00:04:22.064 "config": [] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "iobuf", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.064 "method": "iobuf_set_options", 00:04:22.064 "params": { 00:04:22.064 "small_pool_count": 8192, 00:04:22.064 "large_pool_count": 1024, 00:04:22.064 "small_bufsize": 8192, 00:04:22.064 "large_bufsize": 135168 00:04:22.064 } 00:04:22.064 } 00:04:22.064 ] 00:04:22.064 }, 00:04:22.064 { 00:04:22.064 "subsystem": "sock", 00:04:22.064 "config": [ 00:04:22.064 { 00:04:22.065 "method": "sock_set_default_impl", 00:04:22.065 "params": { 00:04:22.065 "impl_name": "posix" 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "sock_impl_set_options", 00:04:22.065 "params": { 00:04:22.065 "impl_name": "ssl", 00:04:22.065 "recv_buf_size": 4096, 00:04:22.065 "send_buf_size": 4096, 00:04:22.065 "enable_recv_pipe": true, 00:04:22.065 "enable_quickack": false, 00:04:22.065 "enable_placement_id": 0, 00:04:22.065 "enable_zerocopy_send_server": true, 00:04:22.065 
"enable_zerocopy_send_client": false, 00:04:22.065 "zerocopy_threshold": 0, 00:04:22.065 "tls_version": 0, 00:04:22.065 "enable_ktls": false 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "sock_impl_set_options", 00:04:22.065 "params": { 00:04:22.065 "impl_name": "posix", 00:04:22.065 "recv_buf_size": 2097152, 00:04:22.065 "send_buf_size": 2097152, 00:04:22.065 "enable_recv_pipe": true, 00:04:22.065 "enable_quickack": false, 00:04:22.065 "enable_placement_id": 0, 00:04:22.065 "enable_zerocopy_send_server": true, 00:04:22.065 "enable_zerocopy_send_client": false, 00:04:22.065 "zerocopy_threshold": 0, 00:04:22.065 "tls_version": 0, 00:04:22.065 "enable_ktls": false 00:04:22.065 } 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "vmd", 00:04:22.065 "config": [] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "accel", 00:04:22.065 "config": [ 00:04:22.065 { 00:04:22.065 "method": "accel_set_options", 00:04:22.065 "params": { 00:04:22.065 "small_cache_size": 128, 00:04:22.065 "large_cache_size": 16, 00:04:22.065 "task_count": 2048, 00:04:22.065 "sequence_count": 2048, 00:04:22.065 "buf_count": 2048 00:04:22.065 } 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "bdev", 00:04:22.065 "config": [ 00:04:22.065 { 00:04:22.065 "method": "bdev_set_options", 00:04:22.065 "params": { 00:04:22.065 "bdev_io_pool_size": 65535, 00:04:22.065 "bdev_io_cache_size": 256, 00:04:22.065 "bdev_auto_examine": true, 00:04:22.065 "iobuf_small_cache_size": 128, 00:04:22.065 "iobuf_large_cache_size": 16 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "bdev_raid_set_options", 00:04:22.065 "params": { 00:04:22.065 "process_window_size_kb": 1024, 00:04:22.065 "process_max_bandwidth_mb_sec": 0 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "bdev_iscsi_set_options", 00:04:22.065 "params": { 00:04:22.065 "timeout_sec": 30 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "bdev_nvme_set_options", 00:04:22.065 "params": { 00:04:22.065 "action_on_timeout": "none", 00:04:22.065 "timeout_us": 0, 00:04:22.065 "timeout_admin_us": 0, 00:04:22.065 "keep_alive_timeout_ms": 10000, 00:04:22.065 "arbitration_burst": 0, 00:04:22.065 "low_priority_weight": 0, 00:04:22.065 "medium_priority_weight": 0, 00:04:22.065 "high_priority_weight": 0, 00:04:22.065 "nvme_adminq_poll_period_us": 10000, 00:04:22.065 "nvme_ioq_poll_period_us": 0, 00:04:22.065 "io_queue_requests": 0, 00:04:22.065 "delay_cmd_submit": true, 00:04:22.065 "transport_retry_count": 4, 00:04:22.065 "bdev_retry_count": 3, 00:04:22.065 "transport_ack_timeout": 0, 00:04:22.065 "ctrlr_loss_timeout_sec": 0, 00:04:22.065 "reconnect_delay_sec": 0, 00:04:22.065 "fast_io_fail_timeout_sec": 0, 00:04:22.065 "disable_auto_failback": false, 00:04:22.065 "generate_uuids": false, 00:04:22.065 "transport_tos": 0, 00:04:22.065 "nvme_error_stat": false, 00:04:22.065 "rdma_srq_size": 0, 00:04:22.065 "io_path_stat": false, 00:04:22.065 "allow_accel_sequence": false, 00:04:22.065 "rdma_max_cq_size": 0, 00:04:22.065 "rdma_cm_event_timeout_ms": 0, 00:04:22.065 "dhchap_digests": [ 00:04:22.065 "sha256", 00:04:22.065 "sha384", 00:04:22.065 "sha512" 00:04:22.065 ], 00:04:22.065 "dhchap_dhgroups": [ 00:04:22.065 "null", 00:04:22.065 "ffdhe2048", 00:04:22.065 "ffdhe3072", 00:04:22.065 "ffdhe4096", 00:04:22.065 "ffdhe6144", 00:04:22.065 "ffdhe8192" 00:04:22.065 ] 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": 
"bdev_nvme_set_hotplug", 00:04:22.065 "params": { 00:04:22.065 "period_us": 100000, 00:04:22.065 "enable": false 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "bdev_wait_for_examine" 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "scsi", 00:04:22.065 "config": null 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "scheduler", 00:04:22.065 "config": [ 00:04:22.065 { 00:04:22.065 "method": "framework_set_scheduler", 00:04:22.065 "params": { 00:04:22.065 "name": "static" 00:04:22.065 } 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "vhost_scsi", 00:04:22.065 "config": [] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "vhost_blk", 00:04:22.065 "config": [] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "ublk", 00:04:22.065 "config": [] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "nbd", 00:04:22.065 "config": [] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "nvmf", 00:04:22.065 "config": [ 00:04:22.065 { 00:04:22.065 "method": "nvmf_set_config", 00:04:22.065 "params": { 00:04:22.065 "discovery_filter": "match_any", 00:04:22.065 "admin_cmd_passthru": { 00:04:22.065 "identify_ctrlr": false 00:04:22.065 } 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "nvmf_set_max_subsystems", 00:04:22.065 "params": { 00:04:22.065 "max_subsystems": 1024 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "nvmf_set_crdt", 00:04:22.065 "params": { 00:04:22.065 "crdt1": 0, 00:04:22.065 "crdt2": 0, 00:04:22.065 "crdt3": 0 00:04:22.065 } 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "method": "nvmf_create_transport", 00:04:22.065 "params": { 00:04:22.065 "trtype": "TCP", 00:04:22.065 "max_queue_depth": 128, 00:04:22.065 "max_io_qpairs_per_ctrlr": 127, 00:04:22.065 "in_capsule_data_size": 4096, 00:04:22.065 "max_io_size": 131072, 00:04:22.065 "io_unit_size": 131072, 00:04:22.065 "max_aq_depth": 128, 00:04:22.065 "num_shared_buffers": 511, 00:04:22.065 "buf_cache_size": 4294967295, 00:04:22.065 "dif_insert_or_strip": false, 00:04:22.065 "zcopy": false, 00:04:22.065 "c2h_success": true, 00:04:22.065 "sock_priority": 0, 00:04:22.065 "abort_timeout_sec": 1, 00:04:22.065 "ack_timeout": 0, 00:04:22.065 "data_wr_pool_size": 0 00:04:22.065 } 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 }, 00:04:22.065 { 00:04:22.065 "subsystem": "iscsi", 00:04:22.065 "config": [ 00:04:22.065 { 00:04:22.065 "method": "iscsi_set_options", 00:04:22.065 "params": { 00:04:22.065 "node_base": "iqn.2016-06.io.spdk", 00:04:22.065 "max_sessions": 128, 00:04:22.065 "max_connections_per_session": 2, 00:04:22.065 "max_queue_depth": 64, 00:04:22.065 "default_time2wait": 2, 00:04:22.065 "default_time2retain": 20, 00:04:22.065 "first_burst_length": 8192, 00:04:22.065 "immediate_data": true, 00:04:22.065 "allow_duplicated_isid": false, 00:04:22.065 "error_recovery_level": 0, 00:04:22.065 "nop_timeout": 60, 00:04:22.065 "nop_in_interval": 30, 00:04:22.065 "disable_chap": false, 00:04:22.065 "require_chap": false, 00:04:22.065 "mutual_chap": false, 00:04:22.065 "chap_group": 0, 00:04:22.065 "max_large_datain_per_connection": 64, 00:04:22.065 "max_r2t_per_connection": 4, 00:04:22.065 "pdu_pool_size": 36864, 00:04:22.065 "immediate_data_pool_size": 16384, 00:04:22.065 "data_out_pool_size": 2048 00:04:22.065 } 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 } 00:04:22.065 ] 00:04:22.065 } 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 855867 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 855867 ']' 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 855867 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855867 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855867' 00:04:22.065 killing process with pid 855867 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 855867 00:04:22.065 11:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 855867 00:04:22.331 11:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=856007 00:04:22.331 11:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.331 11:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 856007 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 856007 ']' 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 856007 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856007 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856007' 00:04:27.587 killing process with pid 856007 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 856007 00:04:27.587 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 856007 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.845 00:04:27.845 real 0m6.480s 00:04:27.845 user 0m6.060s 00:04:27.845 sys 0m0.700s 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.845 ************************************ 
00:04:27.845 END TEST skip_rpc_with_json 00:04:27.845 ************************************ 00:04:27.845 11:59:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:27.845 11:59:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:27.845 11:59:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.845 11:59:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.845 11:59:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.845 ************************************ 00:04:27.845 START TEST skip_rpc_with_delay 00:04:27.845 ************************************ 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.845 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.104 [2024-07-22 11:59:35.784052] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:28.104 [2024-07-22 11:59:35.784170] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:28.104 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:28.104 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:28.104 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:28.104 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:28.104 00:04:28.104 real 0m0.072s 00:04:28.104 user 0m0.046s 00:04:28.104 sys 0m0.026s 00:04:28.104 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.104 11:59:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.104 ************************************ 00:04:28.104 END TEST skip_rpc_with_delay 00:04:28.104 ************************************ 00:04:28.104 11:59:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:28.104 11:59:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.104 11:59:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.104 11:59:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.104 11:59:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.104 11:59:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.104 11:59:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.104 ************************************ 00:04:28.104 START TEST exit_on_failed_rpc_init 00:04:28.104 ************************************ 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=856719 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 856719 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 856719 ']' 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.104 11:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.104 [2024-07-22 11:59:35.904054] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:04:28.104 [2024-07-22 11:59:35.904138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856719 ] 00:04:28.104 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.104 [2024-07-22 11:59:35.935059] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:28.104 [2024-07-22 11:59:35.966488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.362 [2024-07-22 11:59:36.057171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:28.620 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.620 [2024-07-22 11:59:36.370016] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:28.620 [2024-07-22 11:59:36.370091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856731 ] 00:04:28.620 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.620 [2024-07-22 11:59:36.399571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:04:28.620 [2024-07-22 11:59:36.431125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.620 [2024-07-22 11:59:36.527021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.620 [2024-07-22 11:59:36.527125] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:28.620 [2024-07-22 11:59:36.527147] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:28.620 [2024-07-22 11:59:36.527160] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 856719 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 856719 ']' 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 856719 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856719 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856719' 00:04:28.877 killing process with pid 856719 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 856719 00:04:28.877 11:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 856719 00:04:29.136 00:04:29.136 real 0m1.187s 00:04:29.136 user 0m1.273s 00:04:29.136 sys 0m0.463s 00:04:29.136 11:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.136 11:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.136 ************************************ 00:04:29.136 END TEST exit_on_failed_rpc_init 00:04:29.136 ************************************ 00:04:29.136 11:59:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:29.136 11:59:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.136 00:04:29.136 real 0m13.420s 00:04:29.136 user 0m12.598s 00:04:29.136 sys 0m1.669s 00:04:29.136 11:59:37 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.136 11:59:37 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.136 ************************************ 00:04:29.136 END TEST skip_rpc 00:04:29.136 ************************************ 00:04:29.395 11:59:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:29.395 11:59:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.395 11:59:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.395 11:59:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.395 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.395 ************************************ 00:04:29.395 START TEST rpc_client 00:04:29.395 ************************************ 00:04:29.395 11:59:37 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.395 * Looking for test storage... 00:04:29.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:29.395 11:59:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:29.395 OK 00:04:29.395 11:59:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.395 00:04:29.395 real 0m0.066s 00:04:29.395 user 0m0.027s 00:04:29.395 sys 0m0.044s 00:04:29.395 11:59:37 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.395 11:59:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.395 ************************************ 00:04:29.395 END TEST rpc_client 00:04:29.395 ************************************ 00:04:29.395 11:59:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:29.395 11:59:37 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:29.395 11:59:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.395 11:59:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.395 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.395 ************************************ 00:04:29.395 START TEST json_config 00:04:29.395 ************************************ 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:29.395 11:59:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.395 11:59:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.395 11:59:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.395 11:59:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.395 11:59:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.395 11:59:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.395 11:59:37 json_config -- paths/export.sh@5 -- # export PATH 00:04:29.395 11:59:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@47 -- # : 0 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:04:29.395 11:59:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:29.395 INFO: JSON configuration test init 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.395 11:59:37 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:29.395 11:59:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:29.395 11:59:37 json_config -- json_config/common.sh@10 -- # shift 00:04:29.395 11:59:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:29.395 11:59:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:29.395 11:59:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:29.395 11:59:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.395 11:59:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.395 11:59:37 json_config -- json_config/common.sh@22 -- # 
app_pid["$app"]=856973 00:04:29.395 11:59:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:29.395 11:59:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:29.395 Waiting for target to run... 00:04:29.395 11:59:37 json_config -- json_config/common.sh@25 -- # waitforlisten 856973 /var/tmp/spdk_tgt.sock 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 856973 ']' 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:29.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.395 11:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.652 [2024-07-22 11:59:37.330478] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:29.652 [2024-07-22 11:59:37.330562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856973 ] 00:04:29.652 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.910 [2024-07-22 11:59:37.788731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:29.910 [2024-07-22 11:59:37.822739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.168 [2024-07-22 11:59:37.904651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.424 11:59:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.424 11:59:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:30.424 11:59:38 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.424 00:04:30.424 11:59:38 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:30.424 11:59:38 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:30.424 11:59:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.424 11:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.424 11:59:38 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:30.424 11:59:38 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:30.424 11:59:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.424 11:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.424 11:59:38 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:30.424 11:59:38 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:30.424 11:59:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:33.699 11:59:41 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:33.699 11:59:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:33.699 11:59:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.699 11:59:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.699 11:59:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:33.699 11:59:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:33.699 11:59:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:33.699 11:59:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:33.699 11:59:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:33.699 11:59:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@51 -- # sort 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@58 
-- # timing_exit tgt_check_notification_types 00:04:33.956 11:59:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.956 11:59:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:33.956 11:59:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.956 11:59:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:33.956 11:59:41 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:33.956 11:59:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.231 MallocForNvmf0 00:04:34.231 11:59:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:34.231 11:59:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:34.488 MallocForNvmf1 00:04:34.488 11:59:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:34.488 11:59:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:34.744 [2024-07-22 11:59:42.517066] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.744 11:59:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:34.744 11:59:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.001 11:59:42 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.001 11:59:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.258 11:59:43 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.258 11:59:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.515 11:59:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:35.515 11:59:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:35.771 [2024-07-22 11:59:43.492360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:35.771 11:59:43 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:35.771 11:59:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.771 11:59:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.771 11:59:43 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:35.771 11:59:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.771 11:59:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.771 11:59:43 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:35.771 11:59:43 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:35.771 11:59:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.057 MallocBdevForConfigChangeCheck 00:04:36.057 11:59:43 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:36.057 11:59:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.057 11:59:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.057 11:59:43 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:36.057 11:59:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.353 11:59:44 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:36.353 INFO: shutting down applications... 
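The create_nvmf_subsystem_config phase traced above reduces to a short RPC sequence; every command and argument below is taken verbatim from the trace, only the paths are shortened:

rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
# Two malloc bdevs to back the namespaces (8 MB with 512 B blocks, 4 MB with 1024 B blocks)
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, one subsystem, two namespaces, one listener on 127.0.0.1:4420
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420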
00:04:36.353 11:59:44 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:36.353 11:59:44 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:36.353 11:59:44 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:36.353 11:59:44 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:38.247 Calling clear_iscsi_subsystem 00:04:38.247 Calling clear_nvmf_subsystem 00:04:38.247 Calling clear_nbd_subsystem 00:04:38.247 Calling clear_ublk_subsystem 00:04:38.247 Calling clear_vhost_blk_subsystem 00:04:38.247 Calling clear_vhost_scsi_subsystem 00:04:38.247 Calling clear_bdev_subsystem 00:04:38.247 11:59:45 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:38.247 11:59:45 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:38.247 11:59:45 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:38.247 11:59:45 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.247 11:59:45 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:38.247 11:59:45 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:38.504 11:59:46 json_config -- json_config/json_config.sh@349 -- # break 00:04:38.504 11:59:46 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:38.504 11:59:46 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:38.504 11:59:46 json_config -- json_config/common.sh@31 -- # local app=target 00:04:38.504 11:59:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.504 11:59:46 json_config -- json_config/common.sh@35 -- # [[ -n 856973 ]] 00:04:38.504 11:59:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 856973 00:04:38.504 11:59:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.504 11:59:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.504 11:59:46 json_config -- json_config/common.sh@41 -- # kill -0 856973 00:04:38.504 11:59:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.068 11:59:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.068 11:59:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.068 11:59:46 json_config -- json_config/common.sh@41 -- # kill -0 856973 00:04:39.068 11:59:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.068 11:59:46 json_config -- json_config/common.sh@43 -- # break 00:04:39.068 11:59:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.068 11:59:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.068 SPDK target shutdown done 00:04:39.068 11:59:46 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:39.068 INFO: relaunching applications... 
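The shutdown just traced is two steps: clear the live configuration over RPC, then signal the target and poll until it exits. A condensed sketch, with the 30 x 0.5 s poll bounds taken from the loop counters visible above and $tgt_pid standing in for app_pid['target']:

./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
kill -SIGINT "$tgt_pid"
for ((i = 0; i < 30; i++)); do
    # kill -0 sends no signal; it only tests whether the pid still exists
    kill -0 "$tgt_pid" 2> /dev/null || break
    sleep 0.5
done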
00:04:39.068 11:59:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.068 11:59:46 json_config -- json_config/common.sh@9 -- # local app=target 00:04:39.068 11:59:46 json_config -- json_config/common.sh@10 -- # shift 00:04:39.068 11:59:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.068 11:59:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.068 11:59:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.068 11:59:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.068 11:59:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.068 11:59:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=858269 00:04:39.068 11:59:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.068 11:59:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.068 Waiting for target to run... 00:04:39.068 11:59:46 json_config -- json_config/common.sh@25 -- # waitforlisten 858269 /var/tmp/spdk_tgt.sock 00:04:39.068 11:59:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 858269 ']' 00:04:39.068 11:59:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.068 11:59:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.068 11:59:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.068 11:59:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.068 11:59:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.068 [2024-07-22 11:59:46.790764] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:39.068 [2024-07-22 11:59:46.790849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858269 ] 00:04:39.068 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.634 [2024-07-22 11:59:47.287474] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
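The relaunch step restarts the target from the configuration captured earlier with save_config, so the whole persist-and-restore cycle amounts to the following (paths shortened, output file name as in the trace):

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
# ... stop the running target, then bring it back preloaded from the file:
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &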
00:04:39.634 [2024-07-22 11:59:47.321512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.634 [2024-07-22 11:59:47.403706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.912 [2024-07-22 11:59:50.438523] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.912 [2024-07-22 11:59:50.471005] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.477 11:59:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.477 11:59:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:43.477 11:59:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.477 00:04:43.477 11:59:51 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:43.477 11:59:51 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:43.477 INFO: Checking if target configuration is the same... 00:04:43.477 11:59:51 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.477 11:59:51 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:43.477 11:59:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.477 + '[' 2 -ne 2 ']' 00:04:43.477 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.477 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:43.477 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.477 +++ basename /dev/fd/62 00:04:43.477 ++ mktemp /tmp/62.XXX 00:04:43.477 + tmp_file_1=/tmp/62.E5u 00:04:43.477 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.477 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.477 + tmp_file_2=/tmp/spdk_tgt_config.json.q7v 00:04:43.477 + ret=0 00:04:43.477 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.735 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.735 + diff -u /tmp/62.E5u /tmp/spdk_tgt_config.json.q7v 00:04:43.735 + echo 'INFO: JSON config files are the same' 00:04:43.735 INFO: JSON config files are the same 00:04:43.735 + rm /tmp/62.E5u /tmp/spdk_tgt_config.json.q7v 00:04:43.735 + exit 0 00:04:43.735 11:59:51 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:43.735 11:59:51 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:43.735 INFO: changing configuration and checking if this can be detected... 
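The equality check performed by json_diff.sh above canonicalizes both JSON documents before comparing them, since key order in save_config output is not stable. A sketch of the idea, assuming config_filter.py filters stdin to stdout as its use in the trace suggests, and with live.json and disk.json as illustrative names for the mktemp files (/tmp/62.XXX in the real script):

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > live.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > disk.json
diff -u live.json disk.json && echo 'INFO: JSON config files are the same'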
00:04:43.735 11:59:51 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.735 11:59:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.992 11:59:51 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.992 11:59:51 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:43.992 11:59:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.992 + '[' 2 -ne 2 ']' 00:04:43.992 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.992 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:43.992 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.992 +++ basename /dev/fd/62 00:04:43.992 ++ mktemp /tmp/62.XXX 00:04:43.992 + tmp_file_1=/tmp/62.9FA 00:04:43.992 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.992 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.992 + tmp_file_2=/tmp/spdk_tgt_config.json.ZFK 00:04:43.992 + ret=0 00:04:43.992 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.558 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.558 + diff -u /tmp/62.9FA /tmp/spdk_tgt_config.json.ZFK 00:04:44.558 + ret=1 00:04:44.558 + echo '=== Start of file: /tmp/62.9FA ===' 00:04:44.558 + cat /tmp/62.9FA 00:04:44.558 + echo '=== End of file: /tmp/62.9FA ===' 00:04:44.558 + echo '' 00:04:44.558 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZFK ===' 00:04:44.558 + cat /tmp/spdk_tgt_config.json.ZFK 00:04:44.558 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZFK ===' 00:04:44.558 + echo '' 00:04:44.558 + rm /tmp/62.9FA /tmp/spdk_tgt_config.json.ZFK 00:04:44.558 + exit 1 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:44.558 INFO: configuration change detected. 
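Change detection is the same diff run again after deliberately perturbing the target: the sentinel bdev MallocBdevForConfigChangeCheck is deleted, so the re-captured live configuration can no longer match the file on disk and diff must return non-zero. A sketch reusing the illustrative file names from above:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > live.json
diff -u live.json disk.json > /dev/null || echo 'INFO: configuration change detected.'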
00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@321 -- # [[ -n 858269 ]] 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.558 11:59:52 json_config -- json_config/json_config.sh@327 -- # killprocess 858269 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@948 -- # '[' -z 858269 ']' 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@952 -- # kill -0 858269 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@953 -- # uname 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858269 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858269' 00:04:44.558 killing process with pid 858269 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@967 -- # kill 858269 00:04:44.558 11:59:52 json_config -- common/autotest_common.sh@972 -- # wait 858269 00:04:46.456 11:59:54 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.456 11:59:54 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:46.456 11:59:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.456 11:59:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.456 11:59:54 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:46.456 11:59:54 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:46.456 INFO: Success 00:04:46.456 00:04:46.456 real 0m16.811s 00:04:46.456 user 
0m18.659s 00:04:46.456 sys 0m2.253s 00:04:46.456 11:59:54 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.456 11:59:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.456 ************************************ 00:04:46.456 END TEST json_config 00:04:46.456 ************************************ 00:04:46.456 11:59:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.456 11:59:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.456 11:59:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.456 11:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.456 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:46.456 ************************************ 00:04:46.456 START TEST json_config_extra_key 00:04:46.456 ************************************ 00:04:46.456 11:59:54 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:46.456 11:59:54 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.456 11:59:54 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.456 11:59:54 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.456 11:59:54 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.456 11:59:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.456 11:59:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.456 11:59:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:46.456 11:59:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:46.456 11:59:54 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.456 11:59:54 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:46.456 INFO: launching applications... 00:04:46.456 11:59:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=859209 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.456 Waiting for target to run... 00:04:46.456 11:59:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 859209 /var/tmp/spdk_tgt.sock 00:04:46.456 11:59:54 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 859209 ']' 00:04:46.456 11:59:54 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.456 11:59:54 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.456 11:59:54 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.456 11:59:54 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.456 11:59:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.456 [2024-07-22 11:59:54.181474] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
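The common.sh declarations traced a moment ago keep all per-app state in bash associative arrays keyed by app name, which is what lets the same start/shutdown helpers drive both a 'target' and (in the json_config test) an 'initiator'. A reduced sketch of the target-only case, with the configs_path value shortened to a relative path:

declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='./test/json_config/extra_key.json')
# Helpers then index by app name instead of hard-coding one binary:
echo "target: ${app_params['target']} listening on ${app_socket['target']}"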
00:04:46.456 [2024-07-22 11:59:54.181572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859209 ] 00:04:46.456 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.714 [2024-07-22 11:59:54.501573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:46.714 [2024-07-22 11:59:54.535873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.714 [2024-07-22 11:59:54.599863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.277 11:59:55 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.277 11:59:55 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.277 00:04:47.277 11:59:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:47.277 INFO: shutting down applications... 00:04:47.277 11:59:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 859209 ]] 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 859209 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 859209 00:04:47.277 11:59:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.855 11:59:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.855 11:59:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.855 11:59:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 859209 00:04:47.855 11:59:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.855 11:59:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.855 11:59:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.855 11:59:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.855 SPDK target shutdown done 00:04:47.855 11:59:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.855 Success 00:04:47.855 00:04:47.855 real 0m1.553s 00:04:47.855 user 0m1.496s 00:04:47.855 sys 0m0.438s 00:04:47.855 11:59:55 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.855 11:59:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.855 ************************************ 00:04:47.855 END TEST json_config_extra_key 00:04:47.855 ************************************ 00:04:47.855 11:59:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.855 11:59:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.855 11:59:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.855 11:59:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.855 11:59:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.855 ************************************ 00:04:47.855 START TEST alias_rpc 00:04:47.855 ************************************ 00:04:47.855 11:59:55 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.855 * Looking for test storage... 00:04:47.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:47.855 11:59:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.855 11:59:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=859441 00:04:47.855 11:59:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.855 11:59:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 859441 00:04:47.855 11:59:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 859441 ']' 00:04:47.855 11:59:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.855 11:59:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.855 11:59:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.855 11:59:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.855 11:59:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.113 [2024-07-22 11:59:55.789560] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:48.113 [2024-07-22 11:59:55.789675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859441 ] 00:04:48.113 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.113 [2024-07-22 11:59:55.822377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
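The alias_rpc test body that follows is essentially one RPC call, rpc.py load_config -i, against the freshly started target. The -i switch appears verbatim in the trace; to the best of my reading of rpc.py it is the short form of --include-aliases, i.e. it lets load_config accept deprecated RPC method aliases, which is the behavior this test exists to cover. A sketch, where aliased_config.json is a hypothetical stand-in for whatever configuration is fed in:

# load_config reads the JSON configuration from stdin by default
./scripts/rpc.py load_config -i < aliased_config.json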
00:04:48.113 [2024-07-22 11:59:55.855400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.113 [2024-07-22 11:59:55.946086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.371 11:59:56 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.371 11:59:56 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:48.371 11:59:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:48.628 11:59:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 859441 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 859441 ']' 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 859441 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859441 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.628 11:59:56 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859441' 00:04:48.628 killing process with pid 859441 00:04:48.629 11:59:56 alias_rpc -- common/autotest_common.sh@967 -- # kill 859441 00:04:48.629 11:59:56 alias_rpc -- common/autotest_common.sh@972 -- # wait 859441 00:04:49.193 00:04:49.193 real 0m1.222s 00:04:49.193 user 0m1.310s 00:04:49.193 sys 0m0.438s 00:04:49.193 11:59:56 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.193 11:59:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.193 ************************************ 00:04:49.193 END TEST alias_rpc 00:04:49.193 ************************************ 00:04:49.193 11:59:56 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.194 11:59:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:49.194 11:59:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:49.194 11:59:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.194 11:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.194 11:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:49.194 ************************************ 00:04:49.194 START TEST spdkcli_tcp 00:04:49.194 ************************************ 00:04:49.194 11:59:56 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:49.194 * Looking for test storage... 
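The killprocess helper traced above follows a fixed recipe: validate the pid, confirm the process still exists, check whether it is a sudo wrapper, then kill and reap it. Reduced to its essentials (the real helper has extra branches, e.g. for non-Linux hosts and for the sudo case, which are simplified here):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1          # fails if the pid is already gone
    if [ "$(uname)" = Linux ]; then
        local name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1  # the real helper treats sudo wrappers specially
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                         # reap, mirroring the kill/wait pair in the trace
}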
00:04:49.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:49.194 11:59:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:49.194 11:59:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:49.194 11:59:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:49.194 11:59:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:49.194 11:59:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:49.194 11:59:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:49.194 11:59:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:49.194 11:59:56 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.194 11:59:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.194 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=859704 00:04:49.194 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:49.194 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 859704 00:04:49.194 11:59:57 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 859704 ']' 00:04:49.194 11:59:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.194 11:59:57 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.194 11:59:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.194 11:59:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.194 11:59:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.194 [2024-07-22 11:59:57.054354] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:49.194 [2024-07-22 11:59:57.054447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859704 ] 00:04:49.194 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.194 [2024-07-22 11:59:57.085132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
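The spdkcli_tcp test starting here drives the same RPC surface over TCP instead of the Unix socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side. Both commands below are verbatim from the trace (-r 100 retries the connection up to 100 times, -t 2 sets a 2-second timeout):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods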
00:04:49.194 [2024-07-22 11:59:57.111603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.452 [2024-07-22 11:59:57.196606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.452 [2024-07-22 11:59:57.196609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.710 11:59:57 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.710 11:59:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:49.710 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=859718 00:04:49.710 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:49.710 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:49.968 [ 00:04:49.968 "bdev_malloc_delete", 00:04:49.968 "bdev_malloc_create", 00:04:49.968 "bdev_null_resize", 00:04:49.968 "bdev_null_delete", 00:04:49.968 "bdev_null_create", 00:04:49.968 "bdev_nvme_cuse_unregister", 00:04:49.968 "bdev_nvme_cuse_register", 00:04:49.968 "bdev_opal_new_user", 00:04:49.968 "bdev_opal_set_lock_state", 00:04:49.968 "bdev_opal_delete", 00:04:49.968 "bdev_opal_get_info", 00:04:49.968 "bdev_opal_create", 00:04:49.968 "bdev_nvme_opal_revert", 00:04:49.968 "bdev_nvme_opal_init", 00:04:49.968 "bdev_nvme_send_cmd", 00:04:49.968 "bdev_nvme_get_path_iostat", 00:04:49.968 "bdev_nvme_get_mdns_discovery_info", 00:04:49.968 "bdev_nvme_stop_mdns_discovery", 00:04:49.968 "bdev_nvme_start_mdns_discovery", 00:04:49.968 "bdev_nvme_set_multipath_policy", 00:04:49.968 "bdev_nvme_set_preferred_path", 00:04:49.968 "bdev_nvme_get_io_paths", 00:04:49.968 "bdev_nvme_remove_error_injection", 00:04:49.968 "bdev_nvme_add_error_injection", 00:04:49.968 "bdev_nvme_get_discovery_info", 00:04:49.968 "bdev_nvme_stop_discovery", 00:04:49.968 "bdev_nvme_start_discovery", 00:04:49.968 "bdev_nvme_get_controller_health_info", 00:04:49.968 "bdev_nvme_disable_controller", 00:04:49.968 "bdev_nvme_enable_controller", 00:04:49.968 "bdev_nvme_reset_controller", 00:04:49.968 "bdev_nvme_get_transport_statistics", 00:04:49.968 "bdev_nvme_apply_firmware", 00:04:49.968 "bdev_nvme_detach_controller", 00:04:49.968 "bdev_nvme_get_controllers", 00:04:49.968 "bdev_nvme_attach_controller", 00:04:49.968 "bdev_nvme_set_hotplug", 00:04:49.968 "bdev_nvme_set_options", 00:04:49.968 "bdev_passthru_delete", 00:04:49.968 "bdev_passthru_create", 00:04:49.968 "bdev_lvol_set_parent_bdev", 00:04:49.968 "bdev_lvol_set_parent", 00:04:49.968 "bdev_lvol_check_shallow_copy", 00:04:49.968 "bdev_lvol_start_shallow_copy", 00:04:49.968 "bdev_lvol_grow_lvstore", 00:04:49.968 "bdev_lvol_get_lvols", 00:04:49.968 "bdev_lvol_get_lvstores", 00:04:49.968 "bdev_lvol_delete", 00:04:49.968 "bdev_lvol_set_read_only", 00:04:49.968 "bdev_lvol_resize", 00:04:49.968 "bdev_lvol_decouple_parent", 00:04:49.968 "bdev_lvol_inflate", 00:04:49.968 "bdev_lvol_rename", 00:04:49.969 "bdev_lvol_clone_bdev", 00:04:49.969 "bdev_lvol_clone", 00:04:49.969 "bdev_lvol_snapshot", 00:04:49.969 "bdev_lvol_create", 00:04:49.969 "bdev_lvol_delete_lvstore", 00:04:49.969 "bdev_lvol_rename_lvstore", 00:04:49.969 "bdev_lvol_create_lvstore", 00:04:49.969 "bdev_raid_set_options", 00:04:49.969 "bdev_raid_remove_base_bdev", 00:04:49.969 "bdev_raid_add_base_bdev", 00:04:49.969 "bdev_raid_delete", 00:04:49.969 "bdev_raid_create", 00:04:49.969 "bdev_raid_get_bdevs", 00:04:49.969 "bdev_error_inject_error", 00:04:49.969 "bdev_error_delete", 
00:04:49.969 "bdev_error_create", 00:04:49.969 "bdev_split_delete", 00:04:49.969 "bdev_split_create", 00:04:49.969 "bdev_delay_delete", 00:04:49.969 "bdev_delay_create", 00:04:49.969 "bdev_delay_update_latency", 00:04:49.969 "bdev_zone_block_delete", 00:04:49.969 "bdev_zone_block_create", 00:04:49.969 "blobfs_create", 00:04:49.969 "blobfs_detect", 00:04:49.969 "blobfs_set_cache_size", 00:04:49.969 "bdev_aio_delete", 00:04:49.969 "bdev_aio_rescan", 00:04:49.969 "bdev_aio_create", 00:04:49.969 "bdev_ftl_set_property", 00:04:49.969 "bdev_ftl_get_properties", 00:04:49.969 "bdev_ftl_get_stats", 00:04:49.969 "bdev_ftl_unmap", 00:04:49.969 "bdev_ftl_unload", 00:04:49.969 "bdev_ftl_delete", 00:04:49.969 "bdev_ftl_load", 00:04:49.969 "bdev_ftl_create", 00:04:49.969 "bdev_virtio_attach_controller", 00:04:49.969 "bdev_virtio_scsi_get_devices", 00:04:49.969 "bdev_virtio_detach_controller", 00:04:49.969 "bdev_virtio_blk_set_hotplug", 00:04:49.969 "bdev_iscsi_delete", 00:04:49.969 "bdev_iscsi_create", 00:04:49.969 "bdev_iscsi_set_options", 00:04:49.969 "accel_error_inject_error", 00:04:49.969 "ioat_scan_accel_module", 00:04:49.969 "dsa_scan_accel_module", 00:04:49.969 "iaa_scan_accel_module", 00:04:49.969 "vfu_virtio_create_scsi_endpoint", 00:04:49.969 "vfu_virtio_scsi_remove_target", 00:04:49.969 "vfu_virtio_scsi_add_target", 00:04:49.969 "vfu_virtio_create_blk_endpoint", 00:04:49.969 "vfu_virtio_delete_endpoint", 00:04:49.969 "keyring_file_remove_key", 00:04:49.969 "keyring_file_add_key", 00:04:49.969 "keyring_linux_set_options", 00:04:49.969 "iscsi_get_histogram", 00:04:49.969 "iscsi_enable_histogram", 00:04:49.969 "iscsi_set_options", 00:04:49.969 "iscsi_get_auth_groups", 00:04:49.969 "iscsi_auth_group_remove_secret", 00:04:49.969 "iscsi_auth_group_add_secret", 00:04:49.969 "iscsi_delete_auth_group", 00:04:49.969 "iscsi_create_auth_group", 00:04:49.969 "iscsi_set_discovery_auth", 00:04:49.969 "iscsi_get_options", 00:04:49.969 "iscsi_target_node_request_logout", 00:04:49.969 "iscsi_target_node_set_redirect", 00:04:49.969 "iscsi_target_node_set_auth", 00:04:49.969 "iscsi_target_node_add_lun", 00:04:49.969 "iscsi_get_stats", 00:04:49.969 "iscsi_get_connections", 00:04:49.969 "iscsi_portal_group_set_auth", 00:04:49.969 "iscsi_start_portal_group", 00:04:49.969 "iscsi_delete_portal_group", 00:04:49.969 "iscsi_create_portal_group", 00:04:49.969 "iscsi_get_portal_groups", 00:04:49.969 "iscsi_delete_target_node", 00:04:49.969 "iscsi_target_node_remove_pg_ig_maps", 00:04:49.969 "iscsi_target_node_add_pg_ig_maps", 00:04:49.969 "iscsi_create_target_node", 00:04:49.969 "iscsi_get_target_nodes", 00:04:49.969 "iscsi_delete_initiator_group", 00:04:49.969 "iscsi_initiator_group_remove_initiators", 00:04:49.969 "iscsi_initiator_group_add_initiators", 00:04:49.969 "iscsi_create_initiator_group", 00:04:49.969 "iscsi_get_initiator_groups", 00:04:49.969 "nvmf_set_crdt", 00:04:49.969 "nvmf_set_config", 00:04:49.969 "nvmf_set_max_subsystems", 00:04:49.969 "nvmf_stop_mdns_prr", 00:04:49.969 "nvmf_publish_mdns_prr", 00:04:49.969 "nvmf_subsystem_get_listeners", 00:04:49.969 "nvmf_subsystem_get_qpairs", 00:04:49.969 "nvmf_subsystem_get_controllers", 00:04:49.969 "nvmf_get_stats", 00:04:49.969 "nvmf_get_transports", 00:04:49.969 "nvmf_create_transport", 00:04:49.969 "nvmf_get_targets", 00:04:49.969 "nvmf_delete_target", 00:04:49.969 "nvmf_create_target", 00:04:49.969 "nvmf_subsystem_allow_any_host", 00:04:49.969 "nvmf_subsystem_remove_host", 00:04:49.969 "nvmf_subsystem_add_host", 00:04:49.969 "nvmf_ns_remove_host", 
00:04:49.969 "nvmf_ns_add_host", 00:04:49.969 "nvmf_subsystem_remove_ns", 00:04:49.969 "nvmf_subsystem_add_ns", 00:04:49.969 "nvmf_subsystem_listener_set_ana_state", 00:04:49.969 "nvmf_discovery_get_referrals", 00:04:49.969 "nvmf_discovery_remove_referral", 00:04:49.969 "nvmf_discovery_add_referral", 00:04:49.969 "nvmf_subsystem_remove_listener", 00:04:49.969 "nvmf_subsystem_add_listener", 00:04:49.969 "nvmf_delete_subsystem", 00:04:49.969 "nvmf_create_subsystem", 00:04:49.969 "nvmf_get_subsystems", 00:04:49.969 "env_dpdk_get_mem_stats", 00:04:49.969 "nbd_get_disks", 00:04:49.969 "nbd_stop_disk", 00:04:49.969 "nbd_start_disk", 00:04:49.969 "ublk_recover_disk", 00:04:49.969 "ublk_get_disks", 00:04:49.969 "ublk_stop_disk", 00:04:49.969 "ublk_start_disk", 00:04:49.969 "ublk_destroy_target", 00:04:49.969 "ublk_create_target", 00:04:49.969 "virtio_blk_create_transport", 00:04:49.969 "virtio_blk_get_transports", 00:04:49.969 "vhost_controller_set_coalescing", 00:04:49.969 "vhost_get_controllers", 00:04:49.969 "vhost_delete_controller", 00:04:49.969 "vhost_create_blk_controller", 00:04:49.969 "vhost_scsi_controller_remove_target", 00:04:49.969 "vhost_scsi_controller_add_target", 00:04:49.969 "vhost_start_scsi_controller", 00:04:49.969 "vhost_create_scsi_controller", 00:04:49.969 "thread_set_cpumask", 00:04:49.969 "framework_get_governor", 00:04:49.969 "framework_get_scheduler", 00:04:49.969 "framework_set_scheduler", 00:04:49.969 "framework_get_reactors", 00:04:49.969 "thread_get_io_channels", 00:04:49.969 "thread_get_pollers", 00:04:49.969 "thread_get_stats", 00:04:49.969 "framework_monitor_context_switch", 00:04:49.969 "spdk_kill_instance", 00:04:49.969 "log_enable_timestamps", 00:04:49.969 "log_get_flags", 00:04:49.969 "log_clear_flag", 00:04:49.969 "log_set_flag", 00:04:49.969 "log_get_level", 00:04:49.969 "log_set_level", 00:04:49.969 "log_get_print_level", 00:04:49.969 "log_set_print_level", 00:04:49.969 "framework_enable_cpumask_locks", 00:04:49.969 "framework_disable_cpumask_locks", 00:04:49.969 "framework_wait_init", 00:04:49.969 "framework_start_init", 00:04:49.969 "scsi_get_devices", 00:04:49.969 "bdev_get_histogram", 00:04:49.969 "bdev_enable_histogram", 00:04:49.969 "bdev_set_qos_limit", 00:04:49.969 "bdev_set_qd_sampling_period", 00:04:49.969 "bdev_get_bdevs", 00:04:49.969 "bdev_reset_iostat", 00:04:49.969 "bdev_get_iostat", 00:04:49.969 "bdev_examine", 00:04:49.969 "bdev_wait_for_examine", 00:04:49.969 "bdev_set_options", 00:04:49.969 "notify_get_notifications", 00:04:49.969 "notify_get_types", 00:04:49.969 "accel_get_stats", 00:04:49.969 "accel_set_options", 00:04:49.969 "accel_set_driver", 00:04:49.969 "accel_crypto_key_destroy", 00:04:49.969 "accel_crypto_keys_get", 00:04:49.969 "accel_crypto_key_create", 00:04:49.969 "accel_assign_opc", 00:04:49.969 "accel_get_module_info", 00:04:49.969 "accel_get_opc_assignments", 00:04:49.969 "vmd_rescan", 00:04:49.969 "vmd_remove_device", 00:04:49.969 "vmd_enable", 00:04:49.969 "sock_get_default_impl", 00:04:49.969 "sock_set_default_impl", 00:04:49.969 "sock_impl_set_options", 00:04:49.969 "sock_impl_get_options", 00:04:49.969 "iobuf_get_stats", 00:04:49.969 "iobuf_set_options", 00:04:49.969 "keyring_get_keys", 00:04:49.969 "framework_get_pci_devices", 00:04:49.969 "framework_get_config", 00:04:49.969 "framework_get_subsystems", 00:04:49.969 "vfu_tgt_set_base_path", 00:04:49.969 "trace_get_info", 00:04:49.969 "trace_get_tpoint_group_mask", 00:04:49.969 "trace_disable_tpoint_group", 00:04:49.969 "trace_enable_tpoint_group", 00:04:49.969 
"trace_clear_tpoint_mask", 00:04:49.969 "trace_set_tpoint_mask", 00:04:49.969 "spdk_get_version", 00:04:49.969 "rpc_get_methods" 00:04:49.969 ] 00:04:49.969 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.969 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:49.969 11:59:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 859704 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 859704 ']' 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 859704 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859704 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859704' 00:04:49.969 killing process with pid 859704 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 859704 00:04:49.969 11:59:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 859704 00:04:50.228 00:04:50.228 real 0m1.197s 00:04:50.228 user 0m2.134s 00:04:50.228 sys 0m0.430s 00:04:50.228 11:59:58 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.228 11:59:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.228 ************************************ 00:04:50.228 END TEST spdkcli_tcp 00:04:50.228 ************************************ 00:04:50.487 11:59:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.487 11:59:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.487 11:59:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.487 11:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.487 11:59:58 -- common/autotest_common.sh@10 -- # set +x 00:04:50.487 ************************************ 00:04:50.487 START TEST dpdk_mem_utility 00:04:50.487 ************************************ 00:04:50.487 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.487 * Looking for test storage... 
00:04:50.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:50.487 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:50.487 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=859904 00:04:50.487 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.487 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 859904 00:04:50.487 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 859904 ']' 00:04:50.487 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.487 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.487 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.487 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.487 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.487 [2024-07-22 11:59:58.299317] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:50.487 [2024-07-22 11:59:58.299413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859904 ] 00:04:50.487 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.487 [2024-07-22 11:59:58.331175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
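The dpdk_mem_utility test below drives the env_dpdk_get_mem_stats RPC, which makes spdk_tgt dump its DPDK heap, mempool, and memzone state to /tmp/spdk_mem_dump.txt, and then runs scripts/dpdk_mem_info.py to summarize that dump (the per-heap, malloc-element, and memzone listings that follow). A rough sketch of post-processing such a dump, assuming the "Zone N:" record format DPDK's memzone dump uses; the parsing is illustrative only:

    import re

    def count_memzones(dump_path="/tmp/spdk_mem_dump.txt"):
        # Count memzone records in the dump written by env_dpdk_get_mem_stats.
        with open(dump_path) as f:
            return sum(1 for line in f if re.match(r"Zone \d+:", line))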
00:04:50.487 [2024-07-22 11:59:58.357558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.746 [2024-07-22 11:59:58.443782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.004 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.004 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:51.004 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.004 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.004 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.004 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.004 { 00:04:51.004 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.004 } 00:04:51.004 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.004 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:51.004 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:51.004 1 heaps totaling size 814.000000 MiB 00:04:51.004 size: 814.000000 MiB heap id: 0 00:04:51.004 end heaps---------- 00:04:51.004 8 mempools totaling size 598.116089 MiB 00:04:51.005 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.005 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.005 size: 84.521057 MiB name: bdev_io_859904 00:04:51.005 size: 51.011292 MiB name: evtpool_859904 00:04:51.005 size: 50.003479 MiB name: msgpool_859904 00:04:51.005 size: 21.763794 MiB name: PDU_Pool 00:04:51.005 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.005 size: 0.026123 MiB name: Session_Pool 00:04:51.005 end mempools------- 00:04:51.005 6 memzones totaling size 4.142822 MiB 00:04:51.005 size: 1.000366 MiB name: RG_ring_0_859904 00:04:51.005 size: 1.000366 MiB name: RG_ring_1_859904 00:04:51.005 size: 1.000366 MiB name: RG_ring_4_859904 00:04:51.005 size: 1.000366 MiB name: RG_ring_5_859904 00:04:51.005 size: 0.125366 MiB name: RG_ring_2_859904 00:04:51.005 size: 0.015991 MiB name: RG_ring_3_859904 00:04:51.005 end memzones------- 00:04:51.005 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.005 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:51.005 list of free elements. 
size: 12.519348 MiB 00:04:51.005 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:51.005 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:51.005 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:51.005 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:51.005 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:51.005 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:51.005 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:51.005 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:51.005 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:51.005 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:51.005 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:51.005 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:51.005 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:51.005 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:51.005 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:51.005 list of standard malloc elements. size: 199.218079 MiB 00:04:51.005 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:51.005 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:51.005 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:51.005 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:51.005 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:51.005 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:51.005 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:51.005 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:51.005 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:51.005 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:51.005 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:51.005 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:51.005 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:51.005 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:51.005 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:51.005 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:51.005 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:51.005 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:51.005 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:51.005 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:51.005 list of memzone associated elements. size: 602.262573 MiB 00:04:51.005 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:51.005 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.005 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:51.005 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.005 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:51.005 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_859904_0 00:04:51.005 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:51.005 associated memzone info: size: 48.002930 MiB name: MP_evtpool_859904_0 00:04:51.005 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:51.005 associated memzone info: size: 48.002930 MiB name: MP_msgpool_859904_0 00:04:51.005 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:51.005 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.005 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:51.005 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.005 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:51.005 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_859904 00:04:51.005 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:51.005 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_859904 00:04:51.005 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:51.005 associated memzone info: size: 1.007996 MiB name: MP_evtpool_859904 00:04:51.005 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:51.005 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.005 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:51.005 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.005 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:51.005 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.005 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:51.005 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.005 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:51.005 associated memzone info: size: 1.000366 MiB name: RG_ring_0_859904 00:04:51.005 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:51.005 associated memzone info: size: 1.000366 MiB name: RG_ring_1_859904 00:04:51.005 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:51.005 associated memzone info: size: 1.000366 MiB name: RG_ring_4_859904 00:04:51.005 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:51.005 associated memzone info: size: 1.000366 MiB name: RG_ring_5_859904 00:04:51.005 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:51.005 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_859904 00:04:51.005 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:51.005 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.005 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:51.005 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.005 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:51.005 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.005 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:51.005 associated memzone info: size: 0.125366 MiB name: RG_ring_2_859904 00:04:51.005 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:51.005 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.005 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:51.005 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.005 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:51.005 associated memzone info: size: 0.015991 MiB name: RG_ring_3_859904 00:04:51.005 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:51.005 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.005 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:51.005 associated memzone info: size: 0.000183 MiB name: MP_msgpool_859904 00:04:51.005 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:51.005 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_859904 00:04:51.005 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:51.005 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.005 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.005 11:59:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 859904 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 859904 ']' 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 859904 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859904 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859904' 00:04:51.005 killing process with pid 859904 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 859904 00:04:51.005 11:59:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 859904 00:04:51.572 00:04:51.572 real 0m1.037s 00:04:51.572 user 0m0.997s 00:04:51.572 sys 0m0.411s 00:04:51.572 11:59:59 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.572 11:59:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.572 ************************************ 00:04:51.572 END TEST dpdk_mem_utility 00:04:51.572 ************************************ 00:04:51.572 11:59:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.572 11:59:59 -- spdk/autotest.sh@181 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.572 11:59:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.572 11:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.572 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:51.572 ************************************ 00:04:51.572 START TEST event 00:04:51.572 ************************************ 00:04:51.572 11:59:59 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.572 * Looking for test storage... 00:04:51.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:51.572 11:59:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:51.572 11:59:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:51.572 11:59:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.572 11:59:59 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:51.572 11:59:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.572 11:59:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.572 ************************************ 00:04:51.572 START TEST event_perf 00:04:51.572 ************************************ 00:04:51.572 11:59:59 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.572 Running I/O for 1 seconds...[2024-07-22 11:59:59.376485] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:51.572 [2024-07-22 11:59:59.376552] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860094 ] 00:04:51.572 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.572 [2024-07-22 11:59:59.410081] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:51.572 [2024-07-22 11:59:59.440336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.866 [2024-07-22 11:59:59.534132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.866 [2024-07-22 11:59:59.534162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.866 [2024-07-22 11:59:59.534224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.866 [2024-07-22 11:59:59.534227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.800 Running I/O for 1 seconds... 00:04:52.800 lcore 0: 227896 00:04:52.800 lcore 1: 227896 00:04:52.800 lcore 2: 227895 00:04:52.800 lcore 3: 227896 00:04:52.800 done. 
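The per-lcore counters just printed are the event_perf result: with -m 0xF the app runs four reactors, and each reports how many events it processed during the one-second run. Summing them gives the aggregate rate:

    # Totals taken from the lcore lines above.
    lcores = {0: 227896, 1: 227896, 2: 227895, 3: 227896}
    print(sum(lcores.values()), "events/sec across", len(lcores), "reactors")
    # -> 911583 events/sec across 4 reactors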
00:04:52.800 00:04:52.800 real 0m1.250s 00:04:52.800 user 0m4.158s 00:04:52.800 sys 0m0.085s 00:04:52.800 12:00:00 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.800 12:00:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.800 ************************************ 00:04:52.800 END TEST event_perf 00:04:52.800 ************************************ 00:04:52.800 12:00:00 event -- common/autotest_common.sh@1142 -- # return 0 00:04:52.800 12:00:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.800 12:00:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:52.800 12:00:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.800 12:00:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.800 ************************************ 00:04:52.800 START TEST event_reactor 00:04:52.800 ************************************ 00:04:52.800 12:00:00 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:52.800 [2024-07-22 12:00:00.676138] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:52.800 [2024-07-22 12:00:00.676206] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860307 ] 00:04:52.800 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.800 [2024-07-22 12:00:00.708901] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
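The event_reactor test whose startup is logged just above (-t 1 on a single core) registers a one-shot timer plus periodic timers and prints a "tick <period>" line at each expiry; the trace appears below. A standalone model, assuming tick periods of 100, 250, and 500 over a 1000-tick window, reproduces the same 13-entry ordering of the periodic ticks (the one-shot entry at the head of the trace is separate); it is a toy, not the reactor code:

    import heapq

    def tick_trace(periods=(100, 250, 500), horizon=1000):
        heap = [(p, p) for p in periods]  # (next expiry, period)
        heapq.heapify(heap)
        out = []
        while heap and heap[0][0] < horizon:
            t, p = heapq.heappop(heap)
            out.append(f"tick {p}")
            heapq.heappush(heap, (t + p, p))  # re-arm the periodic timer
        return out

    print(tick_trace())  # tick 100, tick 100, tick 250, tick 100, ...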
00:04:53.060 [2024-07-22 12:00:00.741074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.060 [2024-07-22 12:00:00.832202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.994 test_start 00:04:53.994 oneshot 00:04:53.994 tick 100 00:04:53.994 tick 100 00:04:53.994 tick 250 00:04:53.994 tick 100 00:04:53.994 tick 100 00:04:53.994 tick 100 00:04:53.994 tick 250 00:04:53.994 tick 500 00:04:53.994 tick 100 00:04:53.994 tick 100 00:04:53.994 tick 250 00:04:53.994 tick 100 00:04:53.994 tick 100 00:04:53.994 test_end 00:04:53.994 00:04:53.994 real 0m1.249s 00:04:53.994 user 0m1.158s 00:04:53.994 sys 0m0.086s 00:04:53.994 12:00:01 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.994 12:00:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.994 ************************************ 00:04:53.994 END TEST event_reactor 00:04:53.994 ************************************ 00:04:54.250 12:00:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:54.250 12:00:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.250 12:00:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:54.250 12:00:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.250 12:00:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.250 ************************************ 00:04:54.250 START TEST event_reactor_perf 00:04:54.250 ************************************ 00:04:54.250 12:00:01 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.250 [2024-07-22 12:00:01.971748] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:04:54.250 [2024-07-22 12:00:01.971806] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860539 ] 00:04:54.250 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.250 [2024-07-22 12:00:02.007496] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
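The event_reactor_perf test whose startup is logged above measures raw single-reactor event throughput; its "Performance: N events per second" line follows. A loose analogue in plain Python (not SPDK code) of what is being measured, a single loop dispatching and re-queueing callbacks for one second:

    import time
    from collections import deque

    def loop_events_per_second(duration=1.0):
        q = deque([lambda: None])
        count, end = 0, time.monotonic() + duration
        while time.monotonic() < end:
            q.popleft()()           # dispatch one event
            q.append(lambda: None)  # re-queue, like the test's resubmit
            count += 1
        return count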
00:04:54.250 [2024-07-22 12:00:02.037854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.250 [2024-07-22 12:00:02.129363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.626 test_start 00:04:55.626 test_end 00:04:55.626 Performance: 356149 events per second 00:04:55.626 00:04:55.626 real 0m1.246s 00:04:55.626 user 0m1.156s 00:04:55.626 sys 0m0.084s 00:04:55.626 12:00:03 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.626 12:00:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.626 ************************************ 00:04:55.626 END TEST event_reactor_perf 00:04:55.626 ************************************ 00:04:55.626 12:00:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:55.626 12:00:03 event -- event/event.sh@49 -- # uname -s 00:04:55.626 12:00:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:55.626 12:00:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:55.626 12:00:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.626 12:00:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.626 12:00:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.626 ************************************ 00:04:55.626 START TEST event_scheduler 00:04:55.626 ************************************ 00:04:55.626 12:00:03 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:55.626 * Looking for test storage... 00:04:55.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:55.626 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:55.626 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=860726 00:04:55.626 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:55.626 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.626 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 860726 00:04:55.626 12:00:03 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 860726 ']' 00:04:55.626 12:00:03 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.626 12:00:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.626 12:00:03 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.626 12:00:03 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.626 12:00:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.626 [2024-07-22 12:00:03.346758] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:04:55.626 [2024-07-22 12:00:03.346857] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860726 ] 00:04:55.626 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.626 [2024-07-22 12:00:03.380885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:55.626 [2024-07-22 12:00:03.408888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.626 [2024-07-22 12:00:03.502665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.626 [2024-07-22 12:00:03.502726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.626 [2024-07-22 12:00:03.502757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.626 [2024-07-22 12:00:03.502760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:55.883 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 [2024-07-22 12:00:03.591727] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:55.883 [2024-07-22 12:00:03.591753] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:55.883 [2024-07-22 12:00:03.591770] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:55.883 [2024-07-22 12:00:03.591786] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:55.883 [2024-07-22 12:00:03.591798] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 [2024-07-22 12:00:03.683771] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
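The notices above show the scheduler test switching the framework to the dynamic scheduler and setting its load limit to 20, core limit to 80, and core busy to 95; the dpdk governor fails to initialize here because the app core mask covers only some SMT siblings, which the test tolerates. For reference, the same switch can be made against a running target with a single JSON-RPC request; transport omitted, shown only as the request shape:

    # framework_set_scheduler request, as rpc.py would send it.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "framework_set_scheduler",
        "params": {"name": "dynamic"},
    }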
00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 ************************************ 00:04:55.883 START TEST scheduler_create_thread 00:04:55.883 ************************************ 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 2 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 3 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 4 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 5 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 6 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 7 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 8 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 9 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 10 00:04:55.883 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.884 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.139 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.140 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.140 12:00:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.140 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.140 12:00:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.067 12:00:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.067 00:04:57.067 real 0m1.173s 00:04:57.067 user 0m0.007s 00:04:57.067 sys 0m0.006s 00:04:57.067 12:00:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.067 12:00:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.067 ************************************ 00:04:57.067 END TEST scheduler_create_thread 00:04:57.067 ************************************ 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:57.067 12:00:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.067 12:00:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 860726 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 860726 ']' 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 860726 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860726 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860726' 00:04:57.067 killing process with pid 860726 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 860726 00:04:57.067 12:00:04 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 860726 00:04:57.642 [2024-07-22 12:00:05.365447] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
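The scheduler_create_thread subtest above created pinned active threads (-a 100) and idle threads (-a 0) with cpumasks 0x1 through 0x8, i.e. one of each per core 0 to 3, plus unpinned threads, then deleted one thread to confirm cleanup. Quick check of the mask-to-core mapping used by those -m arguments:

    # Each single-bit cpumask selects one core: mask 0x1 -> core 0, etc.
    for mask in (0x1, 0x2, 0x4, 0x8):
        print(f"mask {mask:#x} -> core {mask.bit_length() - 1}")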
00:04:57.899 00:04:57.899 real 0m2.321s 00:04:57.899 user 0m2.792s 00:04:57.899 sys 0m0.344s 00:04:57.899 12:00:05 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.899 12:00:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 ************************************ 00:04:57.899 END TEST event_scheduler 00:04:57.899 ************************************ 00:04:57.899 12:00:05 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.899 12:00:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.899 12:00:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.899 12:00:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.899 12:00:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.899 12:00:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 ************************************ 00:04:57.899 START TEST app_repeat 00:04:57.899 ************************************ 00:04:57.899 12:00:05 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=861046 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 861046' 00:04:57.899 Process app_repeat pid: 861046 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.899 spdk_app_start Round 0 00:04:57.899 12:00:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 861046 /var/tmp/spdk-nbd.sock 00:04:57.899 12:00:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 861046 ']' 00:04:57.899 12:00:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.899 12:00:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.899 12:00:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.899 12:00:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.899 12:00:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 [2024-07-22 12:00:05.649665] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:04:57.900 [2024-07-22 12:00:05.649741] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861046 ] 00:04:57.900 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.900 [2024-07-22 12:00:05.681916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:57.900 [2024-07-22 12:00:05.713108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.900 [2024-07-22 12:00:05.810642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.900 [2024-07-22 12:00:05.810654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.157 12:00:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.157 12:00:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.157 12:00:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.414 Malloc0 00:04:58.414 12:00:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.671 Malloc1 00:04:58.671 12:00:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.671 12:00:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.928 /dev/nbd0 00:04:58.928 12:00:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.928 12:00:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:58.928 12:00:06 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.928 1+0 records in 00:04:58.928 1+0 records out 00:04:58.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193839 s, 21.1 MB/s 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:58.928 12:00:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.929 12:00:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:58.929 12:00:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:58.929 12:00:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.929 12:00:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.929 12:00:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.185 /dev/nbd1 00:04:59.185 12:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.185 12:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.185 1+0 records in 00:04:59.185 1+0 records out 00:04:59.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204267 s, 20.1 MB/s 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.185 12:00:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.185 
12:00:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.185 12:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.185 12:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.185 12:00:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.185 12:00:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.185 12:00:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.442 12:00:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.442 { 00:04:59.442 "nbd_device": "/dev/nbd0", 00:04:59.442 "bdev_name": "Malloc0" 00:04:59.442 }, 00:04:59.442 { 00:04:59.442 "nbd_device": "/dev/nbd1", 00:04:59.442 "bdev_name": "Malloc1" 00:04:59.442 } 00:04:59.442 ]' 00:04:59.442 12:00:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.442 { 00:04:59.442 "nbd_device": "/dev/nbd0", 00:04:59.442 "bdev_name": "Malloc0" 00:04:59.442 }, 00:04:59.442 { 00:04:59.442 "nbd_device": "/dev/nbd1", 00:04:59.442 "bdev_name": "Malloc1" 00:04:59.442 } 00:04:59.442 ]' 00:04:59.442 12:00:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.442 12:00:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.442 /dev/nbd1' 00:04:59.442 12:00:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.442 /dev/nbd1' 00:04:59.442 12:00:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.443 256+0 records in 00:04:59.443 256+0 records out 00:04:59.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412256 s, 254 MB/s 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.443 256+0 records in 00:04:59.443 256+0 records out 00:04:59.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208662 s, 50.3 MB/s 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.443 12:00:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.704 256+0 records in 00:04:59.704 256+0 records out 00:04:59.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261509 s, 40.1 MB/s 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.704 12:00:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.961 12:00:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.218 12:00:07 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.218 12:00:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.474 12:00:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.474 12:00:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.731 12:00:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.987 [2024-07-22 12:00:08.793079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.987 [2024-07-22 12:00:08.884129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.987 [2024-07-22 12:00:08.884135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.244 [2024-07-22 12:00:08.941227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.244 [2024-07-22 12:00:08.941298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
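Each app_repeat round drives the same setup through the RPC socket shown in the trace. A minimal sketch of the sequence, assuming the app is already listening on /var/tmp/spdk-nbd.sock (the rpc shorthand variable and shortened script path are mine):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096         # 64 MB malloc bdev, 4096-byte blocks -> Malloc0
  $rpc bdev_malloc_create 64 4096         # second bdev -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0   # expose each bdev as a kernel NBD device
  $rpc nbd_start_disk Malloc1 /dev/nbd1

waitfornbd then polls /proc/partitions with grep -q -w and confirms each device with a one-block O_DIRECT dd read, which is what produces the 1+0 records in/out lines in the trace.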
00:05:03.763 12:00:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.763 12:00:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.763 spdk_app_start Round 1 00:05:03.763 12:00:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 861046 /var/tmp/spdk-nbd.sock 00:05:03.763 12:00:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 861046 ']' 00:05:03.763 12:00:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.763 12:00:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.763 12:00:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.763 12:00:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.763 12:00:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.021 12:00:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.021 12:00:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.021 12:00:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.277 Malloc0 00:05:04.277 12:00:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.534 Malloc1 00:05:04.534 12:00:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.534 12:00:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.790 /dev/nbd0 00:05:04.790 12:00:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.790 12:00:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.790 1+0 records in 00:05:04.790 1+0 records out 00:05:04.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018995 s, 21.6 MB/s 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:04.790 12:00:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:04.790 12:00:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.790 12:00:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.790 12:00:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.047 /dev/nbd1 00:05:05.047 12:00:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.047 12:00:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.047 1+0 records in 00:05:05.047 1+0 records out 00:05:05.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229663 s, 17.8 MB/s 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.047 12:00:12 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.047 12:00:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.047 12:00:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.047 12:00:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.047 12:00:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.047 12:00:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.047 12:00:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.304 { 00:05:05.304 "nbd_device": "/dev/nbd0", 00:05:05.304 "bdev_name": "Malloc0" 00:05:05.304 }, 00:05:05.304 { 00:05:05.304 "nbd_device": "/dev/nbd1", 00:05:05.304 "bdev_name": "Malloc1" 00:05:05.304 } 00:05:05.304 ]' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.304 { 00:05:05.304 "nbd_device": "/dev/nbd0", 00:05:05.304 "bdev_name": "Malloc0" 00:05:05.304 }, 00:05:05.304 { 00:05:05.304 "nbd_device": "/dev/nbd1", 00:05:05.304 "bdev_name": "Malloc1" 00:05:05.304 } 00:05:05.304 ]' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.304 /dev/nbd1' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.304 /dev/nbd1' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.304 256+0 records in 00:05:05.304 256+0 records out 00:05:05.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503742 s, 208 MB/s 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.304 256+0 records in 00:05:05.304 256+0 records out 00:05:05.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0239457 s, 43.8 MB/s 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.304 256+0 records in 00:05:05.304 256+0 records out 00:05:05.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255388 s, 41.1 MB/s 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.304 12:00:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.563 12:00:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.821 12:00:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.085 12:00:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.405 12:00:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.405 12:00:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.664 12:00:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.664 [2024-07-22 12:00:14.549662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.920 [2024-07-22 12:00:14.641463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.920 [2024-07-22 12:00:14.641468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.920 [2024-07-22 12:00:14.704401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.920 [2024-07-22 12:00:14.704496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
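The data check in each round follows a single write/verify pattern; a sketch using the same paths that appear in the trace (the tmp and loop variables are mine):

  tmp=spdk/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write it through the NBD device
    cmp -b -n 1M $tmp $nbd                              # read back and compare byte-for-byte
  done
  rm $tmp

oflag=direct bypasses the page cache, so the cmp exercises the SPDK bdev path rather than cached data.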
00:05:09.440 12:00:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.440 12:00:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.440 spdk_app_start Round 2 00:05:09.440 12:00:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 861046 /var/tmp/spdk-nbd.sock 00:05:09.440 12:00:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 861046 ']' 00:05:09.440 12:00:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.440 12:00:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.440 12:00:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.440 12:00:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.440 12:00:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.697 12:00:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.697 12:00:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:09.697 12:00:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.954 Malloc0 00:05:09.954 12:00:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.211 Malloc1 00:05:10.211 12:00:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.211 12:00:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.467 /dev/nbd0 00:05:10.467 12:00:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.467 12:00:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:10.467 12:00:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:10.467 12:00:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:10.467 12:00:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.467 12:00:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.468 1+0 records in 00:05:10.468 1+0 records out 00:05:10.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162768 s, 25.2 MB/s 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.468 12:00:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:10.468 12:00:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.468 12:00:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.468 12:00:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.724 /dev/nbd1 00:05:10.724 12:00:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.724 12:00:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.724 1+0 records in 00:05:10.724 1+0 records out 00:05:10.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018562 s, 22.1 MB/s 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:10.724 12:00:18 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.724 12:00:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.981 { 00:05:10.981 "nbd_device": "/dev/nbd0", 00:05:10.981 "bdev_name": "Malloc0" 00:05:10.981 }, 00:05:10.981 { 00:05:10.981 "nbd_device": "/dev/nbd1", 00:05:10.981 "bdev_name": "Malloc1" 00:05:10.981 } 00:05:10.981 ]' 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.981 { 00:05:10.981 "nbd_device": "/dev/nbd0", 00:05:10.981 "bdev_name": "Malloc0" 00:05:10.981 }, 00:05:10.981 { 00:05:10.981 "nbd_device": "/dev/nbd1", 00:05:10.981 "bdev_name": "Malloc1" 00:05:10.981 } 00:05:10.981 ]' 00:05:10.981 12:00:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.238 /dev/nbd1' 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.238 /dev/nbd1' 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.238 256+0 records in 00:05:11.238 256+0 records out 00:05:11.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041001 s, 256 MB/s 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.238 256+0 records in 00:05:11.238 256+0 records out 00:05:11.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0240112 s, 43.7 MB/s 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.238 256+0 records in 00:05:11.238 256+0 records out 00:05:11.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255332 s, 41.1 MB/s 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.238 12:00:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.238 12:00:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.495 12:00:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.752 12:00:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.009 12:00:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.009 12:00:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.265 12:00:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.522 [2024-07-22 12:00:20.337107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.522 [2024-07-22 12:00:20.426682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.522 [2024-07-22 12:00:20.426687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.780 [2024-07-22 12:00:20.485041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.780 [2024-07-22 12:00:20.485109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
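Teardown is symmetric each round: stop both NBD devices, confirm over RPC that none remain, then signal the app so the next round can start. A sketch of the calls seen above (rpc shorthand mine):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # prints nothing once both are stopped
  $rpc spdk_kill_instance SIGTERM                  # app exits; event.sh sleeps 3 and loops

The count=0 check relies on grep -c, which prints 0 but exits nonzero when nothing matches; that is why a bare true shows up in the trace right after it.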
00:05:15.307 12:00:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 861046 /var/tmp/spdk-nbd.sock 00:05:15.307 12:00:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 861046 ']' 00:05:15.307 12:00:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.307 12:00:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.307 12:00:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.307 12:00:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.307 12:00:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:15.564 12:00:23 event.app_repeat -- event/event.sh@39 -- # killprocess 861046 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 861046 ']' 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 861046 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861046 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861046' 00:05:15.564 killing process with pid 861046 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@967 -- # kill 861046 00:05:15.564 12:00:23 event.app_repeat -- common/autotest_common.sh@972 -- # wait 861046 00:05:15.820 spdk_app_start is called in Round 0. 00:05:15.820 Shutdown signal received, stop current app iteration 00:05:15.820 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 reinitialization... 00:05:15.820 spdk_app_start is called in Round 1. 00:05:15.820 Shutdown signal received, stop current app iteration 00:05:15.820 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 reinitialization... 00:05:15.820 spdk_app_start is called in Round 2. 00:05:15.820 Shutdown signal received, stop current app iteration 00:05:15.820 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 reinitialization... 00:05:15.820 spdk_app_start is called in Round 3. 
00:05:15.820 Shutdown signal received, stop current app iteration 00:05:15.820 12:00:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.820 12:00:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:15.820 00:05:15.820 real 0m17.936s 00:05:15.820 user 0m39.028s 00:05:15.820 sys 0m3.228s 00:05:15.820 12:00:23 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.820 12:00:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.820 ************************************ 00:05:15.820 END TEST app_repeat 00:05:15.820 ************************************ 00:05:15.820 12:00:23 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.820 12:00:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.820 12:00:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.820 12:00:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.820 12:00:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.820 12:00:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.820 ************************************ 00:05:15.820 START TEST cpu_locks 00:05:15.820 ************************************ 00:05:15.820 12:00:23 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.820 * Looking for test storage... 00:05:15.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.820 12:00:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.820 12:00:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.820 12:00:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.820 12:00:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.820 12:00:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.820 12:00:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.820 12:00:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.820 ************************************ 00:05:15.820 START TEST default_locks 00:05:15.820 ************************************ 00:05:15.820 12:00:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:15.820 12:00:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=863904 00:05:15.821 12:00:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.821 12:00:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 863904 00:05:15.821 12:00:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 863904 ']' 00:05:15.821 12:00:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.821 12:00:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.821 12:00:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.821 12:00:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.821 12:00:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.821 [2024-07-22 12:00:23.741942] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:15.821 [2024-07-22 12:00:23.742025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863904 ] 00:05:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.078 [2024-07-22 12:00:23.772567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:16.078 [2024-07-22 12:00:23.804396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.078 [2024-07-22 12:00:23.898261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.336 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.336 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:16.336 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 863904 00:05:16.336 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 863904 00:05:16.336 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.593 lslocks: write error 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 863904 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 863904 ']' 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 863904 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 863904 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 863904' 00:05:16.593 killing process with pid 863904 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 863904 00:05:16.593 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 863904 00:05:17.157 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 863904 00:05:17.157 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:17.157 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 863904 00:05:17.157 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:17.157 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.157 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 
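The default_locks case hinges on the POSIX lock file spdk_tgt takes for core 0 (-m 0x1): the lock must be visible while the target runs, and the target must be gone afterwards. A sketch of the two checks, reusing the pid from this particular run:

  lslocks -p 863904 | grep -q spdk_cpu_lock   # locks_exist: lock held while running
  kill 863904 && wait 863904                  # killprocess: SIGTERM, then reap

With the process reaped, the subsequent NOT waitforlisten invocation is expected to fail, which is what produces the 'No such process' and ERROR lines below.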
00:05:17.157 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 863904 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 863904 ']' 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (863904) - No such process 00:05:17.158 ERROR: process (pid: 863904) is no longer running 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.158 00:05:17.158 real 0m1.233s 00:05:17.158 user 0m1.201s 00:05:17.158 sys 0m0.547s 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.158 12:00:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 ************************************ 00:05:17.158 END TEST default_locks 00:05:17.158 ************************************ 00:05:17.158 12:00:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:17.158 12:00:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:17.158 12:00:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.158 12:00:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.158 12:00:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 ************************************ 00:05:17.158 START TEST default_locks_via_rpc 00:05:17.158 ************************************ 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=864082 00:05:17.158 12:00:24 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 864082 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 864082 ']' 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.158 12:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 [2024-07-22 12:00:25.032731] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:17.158 [2024-07-22 12:00:25.032823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864082 ] 00:05:17.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.158 [2024-07-22 12:00:25.065220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
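The waitforlisten calls above poll the target's JSON-RPC socket until it answers. A minimal Bash sketch of that retry pattern (not the verbatim helper from autotest_common.sh; it assumes SPDK's stock scripts/rpc.py client and the socket path shown in the log):

    # Poll the RPC socket until spdk_tgt answers; bail out early if it died.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

The max_retries=100 visible in the xtrace is the same bound modeled by the loop counter here.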
00:05:17.415 [2024-07-22 12:00:25.096145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.415 [2024-07-22 12:00:25.190888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 864082 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 864082 00:05:17.672 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 864082 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 864082 ']' 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 864082 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864082 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864082' 00:05:17.929 killing process with pid 864082 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 864082 00:05:17.929 12:00:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 864082 00:05:18.492 00:05:18.492 real 0m1.180s 00:05:18.492 user 0m1.137s 00:05:18.492 sys 0m0.535s 00:05:18.492 12:00:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.492 12:00:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.492 ************************************ 00:05:18.492 END TEST default_locks_via_rpc 00:05:18.492 ************************************ 00:05:18.492 12:00:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.492 12:00:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:18.492 12:00:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.492 12:00:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.492 12:00:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.492 ************************************ 00:05:18.492 START TEST non_locking_app_on_locked_coremask 00:05:18.492 ************************************ 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=864325 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 864325 /var/tmp/spdk.sock 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 864325 ']' 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.492 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.492 [2024-07-22 12:00:26.256228] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:18.492 [2024-07-22 12:00:26.256325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864325 ] 00:05:18.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.492 [2024-07-22 12:00:26.288800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
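For reference, the default_locks_via_rpc pass that just finished toggles the core locks over JSON-RPC instead of at startup. Expressed as plain client calls (a sketch: it assumes the rpc_cmd wrapper in the log resolves to SPDK's scripts/rpc.py, and it reuses the logged pid 864082):

    scripts/rpc.py framework_disable_cpumask_locks   # drops the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2> /dev/null         # empty glob, which is why no_locks passed
    scripts/rpc.py framework_enable_cpumask_locks    # re-claims them
    lslocks -p 864082 | grep spdk_cpu_lock           # locks_exist now finds the locks again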
00:05:18.492 [2024-07-22 12:00:26.314682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.492 [2024-07-22 12:00:26.403010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=864351 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 864351 /var/tmp/spdk2.sock 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 864351 ']' 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.749 12:00:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.007 [2024-07-22 12:00:26.699041] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:19.007 [2024-07-22 12:00:26.699112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864351 ] 00:05:19.007 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.007 [2024-07-22 12:00:26.733339] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:19.007 [2024-07-22 12:00:26.791561] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
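That second launch is the crux of non_locking_app_on_locked_coremask: the first target claimed the core-0 lock at startup, yet the second still comes up because it opts out of locking and listens on its own socket. Reduced to the two launch commands (flags exactly as logged, binary path shortened):

    build/bin/spdk_tgt -m 0x1 &                          # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                         # makes no claim, so both share core 0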
00:05:19.007 [2024-07-22 12:00:26.791592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.264 [2024-07-22 12:00:26.976216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.827 12:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.827 12:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:19.827 12:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 864325 00:05:19.827 12:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 864325 00:05:19.827 12:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.393 lslocks: write error 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 864325 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 864325 ']' 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 864325 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864325 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864325' 00:05:20.393 killing process with pid 864325 00:05:20.393 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 864325 00:05:20.394 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 864325 00:05:21.369 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 864351 00:05:21.369 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 864351 ']' 00:05:21.369 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 864351 00:05:21.369 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:21.370 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.370 12:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864351 00:05:21.370 12:00:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.370 12:00:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.370 12:00:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864351' 00:05:21.370 killing 
process with pid 864351 00:05:21.370 12:00:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 864351 00:05:21.370 12:00:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 864351 00:05:21.628 00:05:21.628 real 0m3.200s 00:05:21.628 user 0m3.337s 00:05:21.628 sys 0m1.054s 00:05:21.628 12:00:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.628 12:00:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.628 ************************************ 00:05:21.628 END TEST non_locking_app_on_locked_coremask 00:05:21.628 ************************************ 00:05:21.628 12:00:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:21.628 12:00:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:21.628 12:00:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.628 12:00:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.628 12:00:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.628 ************************************ 00:05:21.628 START TEST locking_app_on_unlocked_coremask 00:05:21.628 ************************************ 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=864664 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 864664 /var/tmp/spdk.sock 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 864664 ']' 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.628 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.628 [2024-07-22 12:00:29.512869] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
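Every teardown in this suite runs the same killprocess guard sequence seen above: confirm the pid is alive, check the command name so that a sudo process is never killed by mistake, then kill and reap. Condensed from the xtrace (a simplification; wait only reaps children of the calling shell, which holds here because the tests launch spdk_tgt themselves):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 1                       # still running?
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1  # refuse to kill sudo
        kill "$pid" && wait "$pid"                                    # terminate, then reap
    }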
00:05:21.628 [2024-07-22 12:00:29.512961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864664 ] 00:05:21.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.628 [2024-07-22 12:00:29.546041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:21.886 [2024-07-22 12:00:29.578674] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:21.886 [2024-07-22 12:00:29.578705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.886 [2024-07-22 12:00:29.670150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=864792 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 864792 /var/tmp/spdk2.sock 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 864792 ']' 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.143 12:00:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.143 [2024-07-22 12:00:29.971966] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:22.143 [2024-07-22 12:00:29.972046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864792 ] 00:05:22.143 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.143 [2024-07-22 12:00:30.005489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
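A note on the 'lslocks: write error' lines that follow each locks_exist check: they are almost certainly harmless. The helper pipes lslocks into grep -q, grep -q exits at the first match, and lslocks then takes EPIPE on the rest of its output and reports a write error even though the check succeeded. The check itself is just:

    lslocks -p 864325 | grep -q spdk_cpu_lock   # exit 0 means the core lock is held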
00:05:22.143 [2024-07-22 12:00:30.062784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.401 [2024-07-22 12:00:30.245294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.340 12:00:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.340 12:00:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:23.340 12:00:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 864792 00:05:23.340 12:00:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 864792 00:05:23.340 12:00:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.596 lslocks: write error 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 864664 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 864664 ']' 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 864664 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864664 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864664' 00:05:23.596 killing process with pid 864664 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 864664 00:05:23.596 12:00:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 864664 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 864792 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 864792 ']' 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 864792 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864792 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864792' 00:05:24.525 killing process with pid 864792 00:05:24.525 12:00:32 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 864792 00:05:24.525 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 864792 00:05:24.783 00:05:24.783 real 0m3.143s 00:05:24.783 user 0m3.308s 00:05:24.783 sys 0m1.037s 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.783 ************************************ 00:05:24.783 END TEST locking_app_on_unlocked_coremask 00:05:24.783 ************************************ 00:05:24.783 12:00:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.783 12:00:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:24.783 12:00:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.783 12:00:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.783 12:00:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.783 ************************************ 00:05:24.783 START TEST locking_app_on_locked_coremask 00:05:24.783 ************************************ 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=865098 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 865098 /var/tmp/spdk.sock 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 865098 ']' 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.783 12:00:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.783 [2024-07-22 12:00:32.709263] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:24.784 [2024-07-22 12:00:32.709349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865098 ] 00:05:25.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.042 [2024-07-22 12:00:32.740640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
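locking_app_on_locked_coremask, starting here, is the negative case: a second normally-started target on the same mask must fail. The mechanism being exercised is a non-blocking exclusive lock on one file per claimed core under /var/tmp (the exact paths show up in the check_remaining_locks output later in this log). The log itself does not reveal whether SPDK takes flock- or fcntl-style locks, but the effect can be sketched with flock(1):

    # Attempt the same claim a starting target would make (sketch only):
    exec {fd}>/var/tmp/spdk_cpu_lock_000
    flock -xn "$fd" || { echo "core 0 already claimed"; exit 1; }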
00:05:25.042 [2024-07-22 12:00:32.772191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.042 [2024-07-22 12:00:32.861293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.339 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=865222 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 865222 /var/tmp/spdk2.sock 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 865222 /var/tmp/spdk2.sock 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 865222 /var/tmp/spdk2.sock 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 865222 ']' 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.340 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.340 [2024-07-22 12:00:33.180692] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:25.340 [2024-07-22 12:00:33.180780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865222 ] 00:05:25.340 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.340 [2024-07-22 12:00:33.215407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
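The NOT wrapper around waitforlisten above inverts the usual expectation: the step passes only if the wrapped command fails, which is exactly what must happen to pid 865222 since it can never finish starting. Stripped of the valid_exec_arg and exit-status bookkeeping visible in the xtrace, the helper reduces to:

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # failure was the expected outcome
    }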
00:05:25.596 [2024-07-22 12:00:33.279443] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 865098 has claimed it. 00:05:25.596 [2024-07-22 12:00:33.279491] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (865222) - No such process 00:05:26.159 ERROR: process (pid: 865222) is no longer running 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 865098 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 865098 00:05:26.159 12:00:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.416 lslocks: write error 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 865098 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 865098 ']' 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 865098 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865098 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865098' 00:05:26.416 killing process with pid 865098 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 865098 00:05:26.416 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 865098 00:05:26.979 00:05:26.979 real 0m1.985s 00:05:26.979 user 0m2.112s 00:05:26.979 sys 0m0.653s 00:05:26.979 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.979 12:00:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.979 ************************************ 00:05:26.979 END TEST locking_app_on_locked_coremask 00:05:26.979 ************************************ 00:05:26.979 12:00:34 event.cpu_locks -- 
common/autotest_common.sh@1142 -- # return 0 00:05:26.980 12:00:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:26.980 12:00:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.980 12:00:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.980 12:00:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.980 ************************************ 00:05:26.980 START TEST locking_overlapped_coremask 00:05:26.980 ************************************ 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=865395 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 865395 /var/tmp/spdk.sock 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 865395 ']' 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.980 12:00:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.980 [2024-07-22 12:00:34.738029] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:26.980 [2024-07-22 12:00:34.738126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865395 ] 00:05:26.980 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.980 [2024-07-22 12:00:34.770318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
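locking_overlapped_coremask moves from identical masks to overlapping ones: the target above runs with -m 0x7, the one launched next uses -m 0x1c, and the two collide only on core 2. Decoding a core mask is plain shell arithmetic:

    # 0x7 = 0b00111 -> cores 0,1,2; 0x1c = 0b11100 -> cores 2,3,4
    mask=0x1c
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core"   # prints cores 2, 3 and 4
    done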
00:05:26.980 [2024-07-22 12:00:34.796090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.980 [2024-07-22 12:00:34.885647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.980 [2024-07-22 12:00:34.885712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.980 [2024-07-22 12:00:34.885716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=865407 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 865407 /var/tmp/spdk2.sock 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 865407 /var/tmp/spdk2.sock 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 865407 /var/tmp/spdk2.sock 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 865407 ']' 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.237 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.494 [2024-07-22 12:00:35.195206] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:05:27.494 [2024-07-22 12:00:35.195310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865407 ] 00:05:27.494 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.494 [2024-07-22 12:00:35.234747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:27.494 [2024-07-22 12:00:35.289574] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 865395 has claimed it. 00:05:27.494 [2024-07-22 12:00:35.289634] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (865407) - No such process 00:05:28.058 ERROR: process (pid: 865407) is no longer running 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 865395 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 865395 ']' 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 865395 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865395 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 865395' 00:05:28.058 killing process with pid 865395 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 865395 00:05:28.058 12:00:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 865395 00:05:28.626 00:05:28.626 real 0m1.641s 00:05:28.626 user 0m4.477s 00:05:28.626 sys 0m0.456s 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.626 ************************************ 00:05:28.626 END TEST locking_overlapped_coremask 00:05:28.626 ************************************ 00:05:28.626 12:00:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.626 12:00:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.626 12:00:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.626 12:00:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.626 12:00:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.626 ************************************ 00:05:28.626 START TEST locking_overlapped_coremask_via_rpc 00:05:28.626 ************************************ 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=865668 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 865668 /var/tmp/spdk.sock 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 865668 ']' 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.626 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.626 [2024-07-22 12:00:36.428683] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
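The check_remaining_locks pass just above asserts that after the failed 0x1c launch exactly the three lock files of the surviving 0x7 target remain. As the xtrace shows, it compares a glob against a brace expansion; in isolation:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files exist now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, i.e. the 0x7 mask
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # identical lists, so the test passes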
00:05:28.626 [2024-07-22 12:00:36.428784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865668 ] 00:05:28.626 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.627 [2024-07-22 12:00:36.461806] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.627 [2024-07-22 12:00:36.488162] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:28.627 [2024-07-22 12:00:36.488186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.892 [2024-07-22 12:00:36.580102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.892 [2024-07-22 12:00:36.580166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.892 [2024-07-22 12:00:36.580169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.149 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.149 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.149 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=865701 00:05:29.149 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 865701 /var/tmp/spdk2.sock 00:05:29.149 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.150 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 865701 ']' 00:05:29.150 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.150 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.150 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.150 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.150 12:00:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.150 [2024-07-22 12:00:36.875016] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:29.150 [2024-07-22 12:00:36.875098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865701 ] 00:05:29.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.150 [2024-07-22 12:00:36.908662] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:29.150 [2024-07-22 12:00:36.962732] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:29.150 [2024-07-22 12:00:36.962758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.406 [2024-07-22 12:00:37.138864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.406 [2024-07-22 12:00:37.138923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:29.406 [2024-07-22 12:00:37.138925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.970 [2024-07-22 12:00:37.817709] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 865668 has claimed it. 
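This is the payoff of the via_rpc variant: both targets started with --disable-cpumask-locks, so the overlapping 0x7 and 0x1c masks coexist until locking is requested over RPC, and the first framework_enable_cpumask_locks call wins core 2. As plain client calls (assuming, as before, that rpc_cmd wraps scripts/rpc.py; socket paths as logged):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # claims cores 0,1,2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails, core 2 is taken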
00:05:29.970 request: 00:05:29.970 { 00:05:29.970 "method": "framework_enable_cpumask_locks", 00:05:29.970 "req_id": 1 00:05:29.970 } 00:05:29.970 Got JSON-RPC error response 00:05:29.970 response: 00:05:29.970 { 00:05:29.970 "code": -32603, 00:05:29.970 "message": "Failed to claim CPU core: 2" 00:05:29.970 } 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 865668 /var/tmp/spdk.sock 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 865668 ']' 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.970 12:00:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 865701 /var/tmp/spdk2.sock 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 865701 ']' 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
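The error object above is worth a close look: -32603 is the generic 'Internal error' code from the JSON-RPC 2.0 specification, and SPDK carries the real reason in the message field. The same exchange can be reproduced by hand against the Unix socket (a sketch; it assumes a netcat build that supports -U, and the response line is illustrative):

    printf '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}\n' \
        | nc -U /var/tmp/spdk2.sock
    # {"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"Failed to claim CPU core: 2"}}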
00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.228 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.486 00:05:30.486 real 0m1.951s 00:05:30.486 user 0m1.030s 00:05:30.486 sys 0m0.168s 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.486 12:00:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.486 ************************************ 00:05:30.486 END TEST locking_overlapped_coremask_via_rpc 00:05:30.486 ************************************ 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:30.486 12:00:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:30.486 12:00:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 865668 ]] 00:05:30.486 12:00:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 865668 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 865668 ']' 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 865668 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865668 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865668' 00:05:30.486 killing process with pid 865668 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 865668 00:05:30.486 12:00:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 865668 00:05:31.051 12:00:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 865701 ]] 00:05:31.051 12:00:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 865701 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 865701 ']' 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 865701 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
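check_remaining_locks, traced above, passes because the lock files left in /var/tmp exactly match the set that three claimed cores should produce. A minimal sketch of the same comparison:

locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # expected set for three claimed cores
[[ "${locks[*]}" == "${locks_expected[*]}" ]]       # non-zero exit fails the test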
00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865701 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865701' 00:05:31.051 killing process with pid 865701 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 865701 00:05:31.051 12:00:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 865701 00:05:31.310 12:00:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.310 12:00:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:31.310 12:00:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 865668 ]] 00:05:31.310 12:00:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 865668 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 865668 ']' 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 865668 00:05:31.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (865668) - No such process 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 865668 is not found' 00:05:31.310 Process with pid 865668 is not found 00:05:31.310 12:00:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 865701 ]] 00:05:31.310 12:00:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 865701 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 865701 ']' 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 865701 00:05:31.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (865701) - No such process 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 865701 is not found' 00:05:31.310 Process with pid 865701 is not found 00:05:31.310 12:00:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.310 00:05:31.310 real 0m15.579s 00:05:31.310 user 0m27.283s 00:05:31.310 sys 0m5.333s 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.310 12:00:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.310 ************************************ 00:05:31.310 END TEST cpu_locks 00:05:31.310 ************************************ 00:05:31.310 12:00:39 event -- common/autotest_common.sh@1142 -- # return 0 00:05:31.310 00:05:31.310 real 0m39.933s 00:05:31.310 user 1m15.720s 00:05:31.310 sys 0m9.385s 00:05:31.310 12:00:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.310 12:00:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.310 ************************************ 00:05:31.310 END TEST event 00:05:31.310 ************************************ 00:05:31.310 12:00:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.310 12:00:39 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:31.310 12:00:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.310 12:00:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.310 12:00:39 -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.567 ************************************ 00:05:31.567 START TEST thread 00:05:31.567 ************************************ 00:05:31.567 12:00:39 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:31.567 * Looking for test storage... 00:05:31.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:31.567 12:00:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.567 12:00:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:31.567 12:00:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.567 12:00:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.567 ************************************ 00:05:31.567 START TEST thread_poller_perf 00:05:31.567 ************************************ 00:05:31.567 12:00:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.567 [2024-07-22 12:00:39.356725] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:31.567 [2024-07-22 12:00:39.356785] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866068 ] 00:05:31.567 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.567 [2024-07-22 12:00:39.389711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:31.567 [2024-07-22 12:00:39.419467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.823 [2024-07-22 12:00:39.510015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.823 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:32.754 ====================================== 00:05:32.754 busy:2707785092 (cyc) 00:05:32.754 total_run_count: 293000 00:05:32.754 tsc_hz: 2700000000 (cyc) 00:05:32.754 ====================================== 00:05:32.754 poller_cost: 9241 (cyc), 3422 (nsec) 00:05:32.754 00:05:32.754 real 0m1.255s 00:05:32.754 user 0m1.170s 00:05:32.754 sys 0m0.079s 00:05:32.754 12:00:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.754 12:00:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.754 ************************************ 00:05:32.754 END TEST thread_poller_perf 00:05:32.754 ************************************ 00:05:32.754 12:00:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:32.754 12:00:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.754 12:00:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:32.754 12:00:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.754 12:00:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.754 ************************************ 00:05:32.754 START TEST thread_poller_perf 00:05:32.754 ************************************ 00:05:32.754 12:00:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.754 [2024-07-22 12:00:40.656435] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:32.754 [2024-07-22 12:00:40.656501] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866220 ] 00:05:33.012 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.012 [2024-07-22 12:00:40.690223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.012 [2024-07-22 12:00:40.720045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.012 [2024-07-22 12:00:40.813565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.012 Running 1000 pollers for 1 seconds with 0 microseconds period. 
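The poller_perf flags map directly onto the banner above: -b 1000 pollers, -l 1 microsecond period, -t 1 second. The summary block is enough to recompute poller_cost; a sketch of the arithmetic, inferred from the printed numbers rather than read from the tool's source:

busy=2707785092; runs=293000; tsc_hz=2700000000
echo "$((busy / runs)) cyc"                         # 9241, busy cycles per poll
echo "$((busy * 1000000000 / tsc_hz / runs)) nsec"  # 3422, same cost on a 2.7 GHz TSC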
00:05:33.978 ====================================== 00:05:33.978 busy:2702374216 (cyc) 00:05:33.978 total_run_count: 3860000 00:05:33.978 tsc_hz: 2700000000 (cyc) 00:05:33.978 ====================================== 00:05:33.978 poller_cost: 700 (cyc), 259 (nsec) 00:05:33.978 00:05:33.978 real 0m1.253s 00:05:33.978 user 0m1.156s 00:05:33.978 sys 0m0.092s 00:05:33.978 12:00:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.978 12:00:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.978 ************************************ 00:05:33.978 END TEST thread_poller_perf 00:05:33.978 ************************************ 00:05:34.234 12:00:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:34.234 12:00:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:34.234 00:05:34.234 real 0m2.652s 00:05:34.234 user 0m2.384s 00:05:34.234 sys 0m0.268s 00:05:34.234 12:00:41 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.234 12:00:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.234 ************************************ 00:05:34.234 END TEST thread 00:05:34.234 ************************************ 00:05:34.234 12:00:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.234 12:00:41 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:34.234 12:00:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.234 12:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.234 12:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.234 ************************************ 00:05:34.234 START TEST accel 00:05:34.234 ************************************ 00:05:34.234 12:00:41 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:34.234 * Looking for test storage... 00:05:34.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:34.234 12:00:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:34.234 12:00:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:34.234 12:00:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.234 12:00:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=866447 00:05:34.234 12:00:42 accel -- accel/accel.sh@63 -- # waitforlisten 866447 00:05:34.234 12:00:42 accel -- common/autotest_common.sh@829 -- # '[' -z 866447 ']' 00:05:34.234 12:00:42 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.234 12:00:42 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:34.234 12:00:42 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.234 12:00:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:34.234 12:00:42 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
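waitforlisten, invoked above, polls the freshly started spdk_tgt until its UNIX-domain RPC socket answers, giving up after max_retries=100. A rough sketch of the same readiness probe; whether the helper really uses rpc_get_methods is an assumption, but that RPC does exist and is cheap:

until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1   # retry until the target is listening
done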
00:05:34.234 12:00:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.234 12:00:42 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.234 12:00:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.234 12:00:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.234 12:00:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.234 12:00:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.234 12:00:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.234 12:00:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:34.234 12:00:42 accel -- accel/accel.sh@41 -- # jq -r . 00:05:34.234 [2024-07-22 12:00:42.070793] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:34.234 [2024-07-22 12:00:42.070882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866447 ] 00:05:34.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.234 [2024-07-22 12:00:42.104472] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:34.234 [2024-07-22 12:00:42.135099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.493 [2024-07-22 12:00:42.226046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.788 12:00:42 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.788 12:00:42 accel -- common/autotest_common.sh@862 -- # return 0 00:05:34.788 12:00:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:34.788 12:00:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:34.788 12:00:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:34.788 12:00:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:34.788 12:00:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:34.788 12:00:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:34.788 12:00:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.788 12:00:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.788 12:00:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:34.788 12:00:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.788 12:00:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.788 12:00:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.788 12:00:42 accel -- accel/accel.sh@75 -- # killprocess 866447 00:05:34.788 12:00:42 accel -- common/autotest_common.sh@948 -- # '[' -z 866447 ']' 00:05:34.788 12:00:42 accel -- common/autotest_common.sh@952 -- # kill -0 866447 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@953 -- # uname 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 866447 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 866447' 00:05:34.789 killing process with pid 866447 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@967 -- # kill 866447 00:05:34.789 12:00:42 accel -- common/autotest_common.sh@972 -- # wait 866447 00:05:35.046 12:00:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:35.047 12:00:42 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:35.047 12:00:42 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:35.047 12:00:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.047 12:00:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.304 12:00:42 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:35.304 12:00:42 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:35.304 12:00:42 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.304 12:00:42 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:35.304 12:00:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.304 12:00:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:35.304 12:00:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:35.304 12:00:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.304 12:00:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.304 ************************************ 00:05:35.304 START TEST accel_missing_filename 00:05:35.304 ************************************ 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.304 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:35.304 12:00:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:35.304 [2024-07-22 12:00:43.060173] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:35.304 [2024-07-22 12:00:43.060241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866589 ] 00:05:35.304 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.304 [2024-07-22 12:00:43.091781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:35.304 [2024-07-22 12:00:43.122179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.304 [2024-07-22 12:00:43.214107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.563 [2024-07-22 12:00:43.275185] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.563 [2024-07-22 12:00:43.363451] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:35.563 A filename is required. 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:35.563 00:05:35.563 real 0m0.401s 00:05:35.563 user 0m0.286s 00:05:35.563 sys 0m0.146s 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.563 12:00:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:35.563 ************************************ 00:05:35.563 END TEST accel_missing_filename 00:05:35.563 ************************************ 00:05:35.563 12:00:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.563 12:00:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.563 12:00:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:35.563 12:00:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.563 12:00:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.563 ************************************ 00:05:35.563 START TEST accel_compress_verify 00:05:35.563 ************************************ 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.563 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.563 12:00:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:35.563 12:00:43 
accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:35.563 12:00:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.822 12:00:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.822 12:00:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.822 12:00:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.822 12:00:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.822 12:00:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:35.822 12:00:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:35.822 [2024-07-22 12:00:43.510366] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:35.822 [2024-07-22 12:00:43.510447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866725 ] 00:05:35.822 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.822 [2024-07-22 12:00:43.543048] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:35.822 [2024-07-22 12:00:43.574871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.822 [2024-07-22 12:00:43.668131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.822 [2024-07-22 12:00:43.725400] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:36.080 [2024-07-22 12:00:43.801202] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:36.080 00:05:36.080 Compression does not support the verify option, aborting. 
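The 'aborting' line above is the pass condition for accel_compress_verify: the software compress path rejects the -y verify flag, and the NOT wrapper expects the resulting non-zero exit. A minimal sketch of the same invocation from the spdk tree (bib is the uncompressed input file the test ships with):

./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y && exit 1
echo 'compress -y rejected, as expected'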
00:05:36.080 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:36.080 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.080 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:36.081 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:36.081 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:36.081 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.081 00:05:36.081 real 0m0.394s 00:05:36.081 user 0m0.276s 00:05:36.081 sys 0m0.149s 00:05:36.081 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.081 12:00:43 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:36.081 ************************************ 00:05:36.081 END TEST accel_compress_verify 00:05:36.081 ************************************ 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.081 12:00:43 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.081 ************************************ 00:05:36.081 START TEST accel_wrong_workload 00:05:36.081 ************************************ 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:36.081 12:00:43 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:36.081 Unsupported workload type: foobar 00:05:36.081 [2024-07-22 12:00:43.951536] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:36.081 accel_perf options: 00:05:36.081 [-h help message] 00:05:36.081 [-q queue depth per core] 00:05:36.081 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:36.081 [-T number of threads per core 00:05:36.081 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:36.081 [-t time in seconds] 00:05:36.081 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:36.081 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:36.081 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:36.081 [-l for compress/decompress workloads, name of uncompressed input file 00:05:36.081 [-S for crc32c workload, use this seed value (default 0) 00:05:36.081 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:36.081 [-f for fill workload, use this BYTE value (default 255) 00:05:36.081 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:36.081 [-y verify result if this switch is on] 00:05:36.081 [-a tasks to allocate per core (default: same value as -q)] 00:05:36.081 Can be used to spread operations across a wider range of memory. 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.081 00:05:36.081 real 0m0.023s 00:05:36.081 user 0m0.011s 00:05:36.081 sys 0m0.012s 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.081 12:00:43 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:36.081 ************************************ 00:05:36.081 END TEST accel_wrong_workload 00:05:36.081 ************************************ 00:05:36.081 Error: writing output failed: Broken pipe 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.081 12:00:43 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.081 12:00:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.081 ************************************ 00:05:36.081 START TEST accel_negative_buffers 00:05:36.081 ************************************ 00:05:36.081 12:00:43 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:36.081 12:00:43 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:36.081 12:00:43 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:36.081 12:00:43 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:36.081 12:00:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:05:36.081 12:00:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:36.081 12:00:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.081 12:00:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:36.081 12:00:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:36.340 -x option must be non-negative. 00:05:36.340 [2024-07-22 12:00:44.016597] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:36.340 accel_perf options: 00:05:36.340 [-h help message] 00:05:36.340 [-q queue depth per core] 00:05:36.340 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:36.340 [-T number of threads per core 00:05:36.340 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:36.340 [-t time in seconds] 00:05:36.340 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:36.340 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:36.340 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:36.340 [-l for compress/decompress workloads, name of uncompressed input file 00:05:36.340 [-S for crc32c workload, use this seed value (default 0) 00:05:36.340 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:36.340 [-f for fill workload, use this BYTE value (default 255) 00:05:36.340 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:36.340 [-y verify result if this switch is on] 00:05:36.340 [-a tasks to allocate per core (default: same value as -q)] 00:05:36.340 Can be used to spread operations across a wider range of memory. 
00:05:36.340 12:00:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:36.340 12:00:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.340 12:00:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.340 12:00:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.340 00:05:36.340 real 0m0.022s 00:05:36.340 user 0m0.011s 00:05:36.340 sys 0m0.010s 00:05:36.340 12:00:44 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.340 12:00:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:36.340 ************************************ 00:05:36.340 END TEST accel_negative_buffers 00:05:36.340 ************************************ 00:05:36.340 Error: writing output failed: Broken pipe 00:05:36.340 12:00:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.340 12:00:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:36.340 12:00:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:36.340 12:00:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.340 12:00:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.340 ************************************ 00:05:36.340 START TEST accel_crc32c 00:05:36.340 ************************************ 00:05:36.340 12:00:44 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:36.340 12:00:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:36.340 [2024-07-22 12:00:44.086230] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:36.340 [2024-07-22 12:00:44.086294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866802 ] 00:05:36.340 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.340 [2024-07-22 12:00:44.118637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
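accel_crc32c, started above, runs the same binary with a real workload; the long val= trace that follows is the accel_test helper feeding parameters one at a time (crc32c workload, seed 32, verify on, 4096-byte buffers, 1 second on the software module). The equivalent direct invocation would be:

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y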
00:05:36.340 [2024-07-22 12:00:44.150563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.340 [2024-07-22 12:00:44.243828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.598 12:00:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.969 
12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:37.969 12:00:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.969 00:05:37.969 real 0m1.404s 00:05:37.969 user 0m1.260s 00:05:37.969 sys 0m0.147s 00:05:37.969 12:00:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.969 12:00:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:37.969 ************************************ 00:05:37.969 END TEST accel_crc32c 00:05:37.969 ************************************ 00:05:37.969 12:00:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.969 12:00:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:37.969 12:00:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:37.969 12:00:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.969 12:00:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.969 ************************************ 00:05:37.970 START TEST accel_crc32c_C2 00:05:37.970 ************************************ 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:37.970 [2024-07-22 12:00:45.532367] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:05:37.970 [2024-07-22 12:00:45.532433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867068 ] 00:05:37.970 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.970 [2024-07-22 12:00:45.563932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:37.970 [2024-07-22 12:00:45.593922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.970 [2024-07-22 12:00:45.687244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.970 12:00:45 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.970 12:00:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.341 00:05:39.341 real 0m1.408s 00:05:39.341 user 0m1.264s 00:05:39.341 sys 0m0.147s 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.341 12:00:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:39.341 ************************************ 00:05:39.341 END TEST accel_crc32c_C2 00:05:39.341 ************************************ 00:05:39.341 12:00:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.341 12:00:46 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:39.341 12:00:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:39.341 12:00:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.341 12:00:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.341 ************************************ 00:05:39.341 START TEST accel_copy 00:05:39.341 ************************************ 00:05:39.341 12:00:46 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:39.341 12:00:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:39.341 [2024-07-22 12:00:46.985207] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:39.341 [2024-07-22 12:00:46.985270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867227 ] 00:05:39.341 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.341 [2024-07-22 12:00:47.016937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:39.341 [2024-07-22 12:00:47.046978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.341 [2024-07-22 12:00:47.140689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.341 12:00:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.709 12:00:48 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.709 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:40.710 12:00:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.710 00:05:40.710 real 0m1.391s 00:05:40.710 user 0m1.254s 00:05:40.710 sys 0m0.139s 00:05:40.710 12:00:48 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.710 12:00:48 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:40.710 ************************************ 00:05:40.710 END TEST accel_copy 00:05:40.710 ************************************ 00:05:40.710 12:00:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.710 12:00:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.710 12:00:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:40.710 12:00:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.710 12:00:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.710 ************************************ 00:05:40.710 START TEST accel_fill 00:05:40.710 ************************************ 00:05:40.710 12:00:48 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
00:05:40.710 [2024-07-22 12:00:48.419320] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:40.710 [2024-07-22 12:00:48.419385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867388 ] 00:05:40.710 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.710 [2024-07-22 12:00:48.453112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:40.710 [2024-07-22 12:00:48.484133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.710 [2024-07-22 12:00:48.577551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.710 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.967 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.968 12:00:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.897 12:00:49 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:41.897 12:00:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.897 00:05:41.897 real 0m1.414s 00:05:41.897 user 0m1.269s 00:05:41.897 sys 0m0.147s 00:05:41.897 12:00:49 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.897 12:00:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:41.897 ************************************ 00:05:41.897 END TEST accel_fill 00:05:41.897 ************************************ 00:05:42.155 12:00:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.155 12:00:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:42.155 12:00:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.155 12:00:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.155 12:00:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.155 ************************************ 00:05:42.155 START TEST accel_copy_crc32c 00:05:42.155 ************************************ 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.155 12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:42.155 
12:00:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:42.155 [2024-07-22 12:00:49.879278] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:42.155 [2024-07-22 12:00:49.879342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867541 ] 00:05:42.155 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.155 [2024-07-22 12:00:49.914398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:42.155 [2024-07-22 12:00:49.944174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.155 [2024-07-22 12:00:50.044361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.413 12:00:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.413 12:00:50 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.347 00:05:43.347 real 0m1.409s 00:05:43.347 user 0m1.264s 00:05:43.347 sys 0m0.148s 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.347 12:00:51 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:43.347 ************************************ 00:05:43.347 END TEST accel_copy_crc32c 00:05:43.347 ************************************ 00:05:43.604 12:00:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.604 12:00:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:43.604 12:00:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:43.604 12:00:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.604 12:00:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.604 ************************************ 00:05:43.604 START TEST accel_copy_crc32c_C2 00:05:43.604 ************************************ 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local 
accel_opc 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.604 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:43.604 [2024-07-22 12:00:51.342700] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:43.604 [2024-07-22 12:00:51.342764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867813 ] 00:05:43.604 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.604 [2024-07-22 12:00:51.374656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:43.604 [2024-07-22 12:00:51.406816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.604 [2024-07-22 12:00:51.498811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.860 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.861 12:00:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.232 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.233 00:05:45.233 real 0m1.413s 00:05:45.233 user 0m1.270s 00:05:45.233 sys 0m0.146s 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.233 12:00:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:45.233 ************************************ 00:05:45.233 END TEST accel_copy_crc32c_C2 00:05:45.233 ************************************ 00:05:45.233 12:00:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.233 12:00:52 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:45.233 12:00:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:45.233 12:00:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.233 12:00:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.233 ************************************ 00:05:45.233 START TEST accel_dualcast 00:05:45.233 ************************************ 00:05:45.233 12:00:52 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:05:45.234 12:00:52 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:05:45.234 [2024-07-22 12:00:52.799366] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:05:45.234 [2024-07-22 12:00:52.799435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867968 ]
00:05:45.234 EAL: No free 2048 kB hugepages reported on node 1
00:05:45.234 [2024-07-22 12:00:52.831168] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:45.234 [2024-07-22 12:00:52.856840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.234 [2024-07-22 12:00:52.947700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:45.234 12:00:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:46.606 12:00:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:46.606
00:05:46.606 real 0m1.387s
00:05:46.606 user 0m1.249s
00:05:46.606 sys 0m0.140s
00:05:46.606 12:00:54 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:46.606 12:00:54 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:05:46.606 ************************************
00:05:46.606 END TEST accel_dualcast
00:05:46.606 ************************************
00:05:46.606 12:00:54 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:46.606 12:00:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:46.606 12:00:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:46.606 12:00:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:46.606 12:00:54 accel -- common/autotest_common.sh@10 -- # set +x
00:05:46.606 ************************************
00:05:46.606 START TEST accel_compare
00:05:46.606 ************************************
00:05:46.606 12:00:54 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:05:46.606 [2024-07-22 12:00:54.231935] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:05:46.606 [2024-07-22 12:00:54.231998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868133 ]
00:05:46.606 EAL: No free 2048 kB hugepages reported on node 1
00:05:46.606 [2024-07-22 12:00:54.264095] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:46.606 [2024-07-22 12:00:54.294061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.606 [2024-07-22 12:00:54.387176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.606 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:46.607 12:00:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:47.973 12:00:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:47.974 12:00:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:47.974 12:00:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:47.974 12:00:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:47.974
00:05:47.974 real 0m1.405s
00:05:47.974 user 0m1.261s
00:05:47.974 sys 0m0.147s
00:05:47.974 12:00:55 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:47.974 12:00:55 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:05:47.974 ************************************
00:05:47.974 END TEST accel_compare
00:05:47.974 ************************************
00:05:47.974 12:00:55 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:47.974 12:00:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:47.974 12:00:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:47.974 12:00:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:47.974 12:00:55 accel -- common/autotest_common.sh@10 -- # set +x
00:05:47.974 ************************************
00:05:47.974 START TEST accel_xor
00:05:47.974 ************************************
00:05:47.974 12:00:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:05:47.974 12:00:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:05:47.974 [2024-07-22 12:00:55.691605] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:05:47.974 [2024-07-22 12:00:55.691697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868319 ]
00:05:47.974 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.974 [2024-07-22 12:00:55.724342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:47.974 [2024-07-22 12:00:55.757118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.974 [2024-07-22 12:00:55.850275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:48.232 12:00:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:49.165 12:00:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:49.165
00:05:49.165 real 0m1.416s
00:05:49.165 user 0m1.270s
00:05:49.165 sys 0m0.148s
00:05:49.165 12:00:57 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:49.165 12:00:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:49.165 ************************************
00:05:49.165 END TEST accel_xor
00:05:49.165 ************************************
00:05:49.424 12:00:57 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:49.424 12:00:57 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:05:49.424 12:00:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:49.424 12:00:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:49.424 12:00:57 accel -- common/autotest_common.sh@10 -- # set +x
00:05:49.424 ************************************
00:05:49.424 START TEST accel_xor
00:05:49.424 ************************************
00:05:49.424 12:00:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:05:49.424 12:00:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:05:49.424 [2024-07-22 12:00:57.155465] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:05:49.424 [2024-07-22 12:00:57.155529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868558 ]
00:05:49.424 EAL: No free 2048 kB hugepages reported on node 1
00:05:49.424 [2024-07-22 12:00:57.188096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:49.424 [2024-07-22 12:00:57.217847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.424 [2024-07-22 12:00:57.311143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:49.713 12:00:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:50.649 12:00:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:50.649
00:05:50.649 real 0m1.401s
00:05:50.649 user 0m1.257s
00:05:50.649 sys 0m0.147s
00:05:50.649 12:00:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.649 12:00:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:50.649 ************************************
00:05:50.649 END TEST accel_xor
00:05:50.649 ************************************
00:05:50.907 12:00:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:50.907 12:00:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:50.907 12:00:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:50.907 12:00:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:50.907 12:00:58 accel -- common/autotest_common.sh@10 -- # set +x
00:05:50.907 ************************************
00:05:50.907 START TEST accel_dif_verify
00:05:50.907 ************************************
00:05:50.907 12:00:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:05:50.907 [2024-07-22 12:00:58.601479] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:05:50.907 [2024-07-22 12:00:58.601545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868722 ]
00:05:50.907 EAL: No free 2048 kB hugepages reported on node 1
00:05:50.907 [2024-07-22 12:00:58.633004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:50.907 [2024-07-22 12:00:58.662521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.907 [2024-07-22 12:00:58.755542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:50.907 12:00:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:52.279 12:00:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:52.279
00:05:52.279 real 0m1.412s
00:05:52.279 user 0m1.269s
00:05:52.279 sys 0m0.147s
00:05:52.279 12:00:59 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:52.279 12:00:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:05:52.279 ************************************
00:05:52.279 END TEST accel_dif_verify
00:05:52.279 ************************************
00:05:52.279 12:01:00 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:52.279 12:01:00 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:05:52.279 12:01:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:52.279 12:01:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.279 12:01:00 accel -- common/autotest_common.sh@10 -- # set +x
00:05:52.279 ************************************
00:05:52.279 START TEST accel_dif_generate
00:05:52.279 ************************************
00:05:52.279 12:01:00 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:05:52.279 12:01:00 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:05:52.279 [2024-07-22 12:01:00.062550] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:05:52.279 [2024-07-22 12:01:00.062628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868877 ]
00:05:52.279 EAL: No free 2048 kB hugepages reported on node 1
00:05:52.279 [2024-07-22 12:01:00.096330] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:52.279 [2024-07-22 12:01:00.129905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:52.537 [2024-07-22 12:01:00.224666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:52.537 12:01:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:53.905 12:01:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:53.905 12:01:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:53.905 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:53.905 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:05:53.906 12:01:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:53.906
00:05:53.906 real 0m1.407s
00:05:53.906 user 0m1.266s
00:05:53.906 sys 0m0.145s
00:05:53.906 12:01:01 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:53.906 12:01:01 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:53.906 ************************************
00:05:53.906 END TEST accel_dif_generate
00:05:53.906 ************************************
00:05:53.906 12:01:01 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:53.906 12:01:01 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:53.906 12:01:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:53.906 12:01:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:53.906 12:01:01 accel -- common/autotest_common.sh@10 -- # set +x
00:05:53.906 ************************************
00:05:53.906 START TEST accel_dif_generate_copy
00:05:53.906 ************************************
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:05:53.906 [2024-07-22 12:01:01.511927] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:05:53.906 [2024-07-22 12:01:01.512001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869149 ]
00:05:53.906 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.906 [2024-07-22 12:01:01.544491] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:53.906 [2024-07-22 12:01:01.574404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.906 [2024-07-22 12:01:01.667687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:05:53.906 12:01:01
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.906 12:01:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:55.277 00:05:55.277 real 0m1.409s 00:05:55.277 user 0m1.267s 00:05:55.277 sys 0m0.146s 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.277 12:01:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:55.277 ************************************ 00:05:55.277 END TEST accel_dif_generate_copy 00:05:55.277 ************************************ 00:05:55.277 12:01:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.277 12:01:02 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:55.277 12:01:02 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.277 12:01:02 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:55.277 12:01:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.277 12:01:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.277 ************************************ 00:05:55.277 START TEST accel_comp 00:05:55.277 ************************************ 00:05:55.277 12:01:02 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:55.277 12:01:02 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:55.277 [2024-07-22 12:01:02.967490] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:55.277 [2024-07-22 12:01:02.967553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869302 ] 00:05:55.277 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.277 [2024-07-22 12:01:03.000471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
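A note on the trace pattern that fills this section: the repeating IFS=: / read -r var val / case "$var" lines are a single scan loop in accel/accel.sh (the @19-@21 entries) walking the settings accel_perf echoes back, and the [[ -n software ]] / [[ -n dif_generate_copy ]] / [[ software == \s\o\f\t\w\a\r\e ]] lines at @27 are the pass criteria each test must clear before its END banner. A minimal sketch of that shape, with a here-doc standing in for the real accel_perf stream (the input format and variable handling are assumptions, only the loop and check shape come from the trace):

```bash
#!/usr/bin/env bash
# Minimal sketch of the scan loop traced at accel/accel.sh@19-27.
# The here-doc stands in for the real accel_perf output (an assumption);
# only the IFS=: / read -r var val / case shape and the final [[ ... ]]
# checks are taken from the trace itself.
accel_opc=""
accel_module=""
while IFS=: read -r var val; do
    val=${val# }                    # trim the space after the colon
    case "$var" in
        *opcode*) accel_opc=$val ;;
        *module*) accel_module=$val ;;
        *) : ;;                     # everything else is read and ignored
    esac
done <<'EOF'
opcode: dif_generate_copy
module: software
EOF

# The three assertions the trace shows at accel/accel.sh@27:
[[ -n $accel_module ]] && [[ -n $accel_opc ]] &&
    [[ $accel_module == software ]] &&
    echo "PASS: $accel_opc ran on the $accel_module module"
```

Run as-is it prints the PASS line, mirroring how a test only reaches its END banner once all three checks succeed.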
00:05:55.277 [2024-07-22 12:01:03.030477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.277 [2024-07-22 12:01:03.122331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.277 12:01:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:56.674 12:01:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.674 00:05:56.674 real 0m1.398s 00:05:56.674 user 0m1.254s 00:05:56.674 sys 0m0.146s 00:05:56.674 12:01:04 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.674 12:01:04 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:56.674 ************************************ 00:05:56.674 END TEST accel_comp 00:05:56.674 ************************************ 00:05:56.674 12:01:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.674 12:01:04 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.674 12:01:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:56.674 12:01:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.674 12:01:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.674 ************************************ 00:05:56.674 START TEST accel_decomp 00:05:56.674 ************************************ 00:05:56.674 12:01:04 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:56.674 12:01:04 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
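The build_accel_config trace just above (@31-@41) assembles an accel_json_cfg array, empty in every run here, which is why each [[ 0 -gt 0 ]] guard evaluates false, and the resulting JSON reaches accel_perf as -c /dev/fd/62. A plausible reconstruction of that plumbing using process substitution; the empty {} payload and the helper name are assumptions rather than the verbatim harness:

```bash
#!/usr/bin/env bash
# Plausible reconstruction of the "-c /dev/fd/62" plumbing seen at
# accel/accel.sh@12: a JSON config is handed to accel_perf through
# process substitution, so the child sees it as /dev/fd/<n>. The empty
# {} payload and the json_payload helper are assumptions, not the real script.
ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib

accel_json_cfg=()    # stays empty here; all the "[[ 0 -gt 0 ]]" guards were false
json_payload() {
    # With no modules configured an empty object is enough; "jq -r ."
    # mirrors the serialization step traced at accel/accel.sh@41.
    printf '{}' | jq -r .
}

# <(...) appears inside accel_perf as /dev/fd/62; the number is simply
# whichever descriptor the shell assigns for this run.
"$ACCEL_PERF" -c <(json_payload) -t 1 -w decompress -l "$BIB" -y
```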
00:05:56.674 [2024-07-22 12:01:04.408707] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:56.674 [2024-07-22 12:01:04.408766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869468 ] 00:05:56.674 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.674 [2024-07-22 12:01:04.439962] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:56.674 [2024-07-22 12:01:04.470235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.674 [2024-07-22 12:01:04.563836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
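Read vertically, the val= entries running through the next stretch of trace spell out one run's configuration as accel_perf echoes it back: core mask 0x1, opcode decompress, '4096 bytes' data sizes, module software, the bib path, a pair of 32s, a 1, '1 seconds' of runtime, and a closing Yes/No flag. The labels in the sketch below are educated guesses inferred from those values, not taken from accel_perf documentation; only the right-hand values appear in the log:

```bash
#!/usr/bin/env bash
# Hypothetical decoding of the val= sequence; the keys are educated
# guesses, only the right-hand values come from the trace.
declare -A run_cfg=(
    [core_mask]="0x1"          # one core -> a single "Reactor started on core 0"
    [opcode]="decompress"
    [data_size]="4096 bytes"   # becomes '111250 bytes' in the *_full variants
    [module]="software"
    [run_time]="1 seconds"     # matches accel_test -t 1
)
for key in "${!run_cfg[@]}"; do
    printf '%-10s %s\n' "$key" "${run_cfg[$key]}"
done
```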
00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.930 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 12:01:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 12:01:05 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.301 12:01:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.301 00:05:58.301 real 0m1.411s 00:05:58.301 user 0m1.270s 00:05:58.301 sys 0m0.144s 00:05:58.301 12:01:05 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.301 12:01:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:58.301 ************************************ 00:05:58.301 END TEST accel_decomp 00:05:58.301 ************************************ 00:05:58.301 12:01:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.301 12:01:05 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.301 12:01:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:58.301 12:01:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.301 12:01:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.301 ************************************ 00:05:58.301 START TEST accel_decomp_full 00:05:58.302 ************************************ 00:05:58.302 12:01:05 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:58.302 12:01:05 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:58.302 [2024-07-22 12:01:05.874384] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:58.302 [2024-07-22 12:01:05.874449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869622 ] 00:05:58.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.302 [2024-07-22 12:01:05.906551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:58.302 [2024-07-22 12:01:05.938853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.302 [2024-07-22 12:01:06.028713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.302 12:01:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.671 12:01:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.671 00:05:59.671 real 0m1.422s 00:05:59.671 user 0m1.284s 00:05:59.671 sys 0m0.141s 00:05:59.671 12:01:07 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.671 12:01:07 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 
00:05:59.671 ************************************ 00:05:59.671 END TEST accel_decomp_full 00:05:59.671 ************************************ 00:05:59.671 12:01:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.671 12:01:07 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:59.671 12:01:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:59.671 12:01:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.671 12:01:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.671 ************************************ 00:05:59.671 START TEST accel_decomp_mcore 00:05:59.671 ************************************ 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:59.671 [2024-07-22 12:01:07.340510] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:05:59.671 [2024-07-22 12:01:07.340573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869894 ] 00:05:59.671 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.671 [2024-07-22 12:01:07.376642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
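accel_decomp_mcore is the first multi-core variant: -m 0xf hands accel_perf a four-bit core mask, which is why the EAL notices that follow report four available cores and four started reactors instead of the single core 0 seen in every run so far. A small helper to expand such a mask, in case a different one needs checking:

```bash
#!/usr/bin/env bash
# Expand a hex core mask into the cores it enables: 0xf -> 0 1 2 3,
# matching the four "Reactor started on core N" notices below.
mask=0xf
for ((i = 0; i < 8; i++)); do
    if (( (mask >> i) & 1 )); then
        echo "reactor expected on core $i"
    fi
done
```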
00:05:59.671 [2024-07-22 12:01:07.406680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.671 [2024-07-22 12:01:07.503625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.671 [2024-07-22 12:01:07.503664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.671 [2024-07-22 12:01:07.503779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.671 [2024-07-22 12:01:07.503782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.671 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.672 12:01:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.042 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.042 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.042 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.042 
12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.042 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.042 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.043 00:06:01.043 real 0m1.423s 00:06:01.043 user 0m4.724s 00:06:01.043 sys 0m0.159s 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.043 12:01:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:01.043 ************************************ 00:06:01.043 END TEST accel_decomp_mcore 00:06:01.043 ************************************ 00:06:01.043 12:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.043 
12:01:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.043 12:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:01.043 12:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.043 12:01:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.043 ************************************ 00:06:01.043 START TEST accel_decomp_full_mcore 00:06:01.043 ************************************ 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:01.043 12:01:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:01.043 [2024-07-22 12:01:08.807052] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:01.043 [2024-07-22 12:01:08.807112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870052 ] 00:06:01.043 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.043 [2024-07-22 12:01:08.841587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
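For reference, the run_test wrapper above reduces to a single accel_perf invocation. A minimal standalone sketch follows; the SPDK variable, dropping the -c /dev/fd/62 config descriptor, and reading -o 0 as "size buffers from the input file" are all assumptions for a manual run:
# Hypothetical manual repro of the accel_decomp_full_mcore workload:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -t 1: run one second; -w decompress: inflate the pre-compressed bib file;
# -y: verify output; -m 0xf: core mask 0b1111, hence the four reactors below.
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf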
00:06:01.043 [2024-07-22 12:01:08.872047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.043 [2024-07-22 12:01:08.965360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.043 [2024-07-22 12:01:08.965411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.043 [2024-07-22 12:01:08.965524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.043 [2024-07-22 12:01:08.965526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.300 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:01.301 12:01:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.670 00:06:02.670 real 0m1.420s 00:06:02.670 user 0m4.727s 00:06:02.670 sys 0m0.151s 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.670 12:01:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:02.670 ************************************ 00:06:02.670 END TEST accel_decomp_full_mcore 00:06:02.670 ************************************ 00:06:02.670 12:01:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.670 12:01:10 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.670 12:01:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:02.670 12:01:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.670 12:01:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.670 ************************************ 00:06:02.670 START TEST accel_decomp_mthread 00:06:02.670 ************************************ 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:02.670 [2024-07-22 12:01:10.270515] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:02.670 [2024-07-22 12:01:10.270580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870220 ] 00:06:02.670 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.670 [2024-07-22 12:01:10.302933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
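The _mthread variant trades the multi-core mask for -T 2, asking accel_perf for two worker threads on a single core, so both channels share core 0 (mask 0x1 here). Distinguishing flags in isolation (a standalone run is an assumption; $SPDK as above):
# accel_decomp_mthread: one core, two threads driving the decompress queue.
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2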
00:06:02.670 [2024-07-22 12:01:10.332914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.670 [2024-07-22 12:01:10.426562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:02.670 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.671 12:01:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.038 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.038 12:01:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.038 00:06:04.038 real 0m1.412s 00:06:04.038 user 0m1.271s 00:06:04.038 sys 0m0.144s 00:06:04.038 12:01:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.038 12:01:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:04.038 ************************************ 00:06:04.038 END TEST accel_decomp_mthread 00:06:04.038 ************************************ 00:06:04.038 12:01:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.038 12:01:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.038 12:01:11 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:04.038 12:01:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.038 12:01:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.038 ************************************ 00:06:04.038 START TEST accel_decomp_full_mthread 00:06:04.038 ************************************ 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:04.038 [2024-07-22 12:01:11.737017] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:04.038 [2024-07-22 12:01:11.737082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870488 ] 00:06:04.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.038 [2024-07-22 12:01:11.770171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
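Throughout these runs the -c /dev/fd/62 argument is worth decoding: build_accel_config assembles a JSON config in shell and hands it to accel_perf via process substitution, so the binary reads its config from an anonymous descriptor instead of a file on disk. The same pattern in isolation (the JSON body is a placeholder, not the harness's actual config):
# Process substitution exposes generated JSON to accel_perf as /dev/fd/NN:
$SPDK/build/examples/accel_perf -c <(echo '{"subsystems": []}') \
    -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2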
00:06:04.038 [2024-07-22 12:01:11.802653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.038 [2024-07-22 12:01:11.895714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.038 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.294 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.294 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.294 12:01:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.684 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.685 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.685 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.685 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.685 12:01:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.685 00:06:05.685 real 0m1.452s 00:06:05.685 user 0m1.300s 00:06:05.685 sys 0m0.155s 00:06:05.685 12:01:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.685 12:01:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:05.685 ************************************ 00:06:05.685 END TEST accel_decomp_full_mthread 00:06:05.685 ************************************ 00:06:05.685 12:01:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.685 12:01:13 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:05.685 12:01:13 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 
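The DIF functional tests that follow exercise T10 protection-information checks rather than a perf workload: each 8-byte tuple carries a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag, and every "not generated" case below corrupts one of those fields and expects the matching compare error. The binary could be invoked directly along these lines (hypothetical standalone form; config placeholder as before):
# CUnit DIF suite, fed an empty JSON config via process substitution:
$SPDK/test/accel/dif/dif -c <(echo '{"subsystems": []}')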
00:06:05.685 12:01:13 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:05.685 12:01:13 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:05.685 12:01:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.685 12:01:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.685 12:01:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.685 12:01:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.685 12:01:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.685 12:01:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.685 12:01:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.685 12:01:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:05.685 12:01:13 accel -- accel/accel.sh@41 -- # jq -r . 00:06:05.685 ************************************ 00:06:05.685 START TEST accel_dif_functional_tests 00:06:05.685 ************************************ 00:06:05.685 12:01:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:05.685 [2024-07-22 12:01:13.256723] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:05.685 [2024-07-22 12:01:13.256799] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870640 ] 00:06:05.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.685 [2024-07-22 12:01:13.287939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.685 [2024-07-22 12:01:13.317756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.685 [2024-07-22 12:01:13.411667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.685 [2024-07-22 12:01:13.411720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.685 [2024-07-22 12:01:13.411737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.685 00:06:05.685 00:06:05.685 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.685 http://cunit.sourceforge.net/ 00:06:05.685 00:06:05.685 00:06:05.685 Suite: accel_dif 00:06:05.685 Test: verify: DIF generated, GUARD check ...passed 00:06:05.685 Test: verify: DIF generated, APPTAG check ...passed 00:06:05.685 Test: verify: DIF generated, REFTAG check ...passed 00:06:05.685 Test: verify: DIF not generated, GUARD check ...[2024-07-22 12:01:13.497887] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:05.685 passed 00:06:05.685 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 12:01:13.497965] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:05.685 passed 00:06:05.685 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 12:01:13.498012] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:05.685 passed 00:06:05.685 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:05.685 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 12:01:13.498086] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:05.685 passed 00:06:05.685 Test: verify: APPTAG incorrect, no APPTAG check ...passed 
00:06:05.685 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:05.685 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:05.685 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 12:01:13.498218] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:05.685 passed 00:06:05.685 Test: verify copy: DIF generated, GUARD check ...passed 00:06:05.685 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:05.685 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:05.685 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 12:01:13.498382] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:05.685 passed 00:06:05.685 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 12:01:13.498416] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:05.685 passed 00:06:05.685 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 12:01:13.498449] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:05.685 passed 00:06:05.685 Test: generate copy: DIF generated, GUARD check ...passed 00:06:05.685 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:05.685 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:05.685 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:05.685 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:05.685 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:05.685 Test: generate copy: iovecs-len validate ...[2024-07-22 12:01:13.498694] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:05.685 passed 00:06:05.685 Test: generate copy: buffer alignment validate ...passed 00:06:05.685 00:06:05.685 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.685 suites 1 1 n/a 0 0 00:06:05.685 tests 26 26 26 0 0 00:06:05.685 asserts 115 115 115 0 n/a 00:06:05.685 00:06:05.685 Elapsed time = 0.002 seconds 00:06:05.943 00:06:05.943 real 0m0.487s 00:06:05.943 user 0m0.733s 00:06:05.943 sys 0m0.177s 00:06:05.943 12:01:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.943 12:01:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:05.943 ************************************ 00:06:05.943 END TEST accel_dif_functional_tests 00:06:05.943 ************************************ 00:06:05.943 12:01:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.943 00:06:05.943 real 0m31.760s 00:06:05.943 user 0m35.061s 00:06:05.943 sys 0m4.635s 00:06:05.943 12:01:13 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.943 12:01:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.943 ************************************ 00:06:05.943 END TEST accel 00:06:05.943 ************************************ 00:06:05.943 12:01:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.943 12:01:13 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:05.943 12:01:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.943 12:01:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.943 12:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:05.943 ************************************ 00:06:05.943 START TEST accel_rpc 00:06:05.943 ************************************ 00:06:05.943 12:01:13 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:05.943 * Looking for test storage... 00:06:05.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:05.943 12:01:13 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.943 12:01:13 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=870722 00:06:05.943 12:01:13 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:05.943 12:01:13 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 870722 00:06:05.943 12:01:13 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 870722 ']' 00:06:05.943 12:01:13 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.943 12:01:13 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.943 12:01:13 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.943 12:01:13 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.943 12:01:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.201 [2024-07-22 12:01:13.879567] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:06:06.201 [2024-07-22 12:01:13.879671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870722 ] 00:06:06.201 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.201 [2024-07-22 12:01:13.912647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.201 [2024-07-22 12:01:13.941333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.201 [2024-07-22 12:01:14.027743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.201 12:01:14 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.201 12:01:14 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.201 12:01:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:06.201 12:01:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:06.201 12:01:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:06.201 12:01:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:06.201 12:01:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:06.201 12:01:14 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.201 12:01:14 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.201 12:01:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.201 ************************************ 00:06:06.201 START TEST accel_assign_opcode 00:06:06.201 ************************************ 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.201 [2024-07-22 12:01:14.120433] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.201 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.201 [2024-07-22 12:01:14.128460] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:06.458 
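The assignment test above drives a short RPC sequence against a spdk_tgt started with --wait-for-rpc: assign the copy opcode (first to a bogus module, then to software), finish framework init, and read the table back. Condensed sketch (hypothetical standalone commands; rpc_cmd in the harness wraps scripts/rpc.py):
# Pin the 'copy' opcode to the software module, finish init, read it back:
$SPDK/scripts/rpc.py accel_assign_opc -o copy -m software
$SPDK/scripts/rpc.py framework_start_init
$SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected: software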
12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:06.458 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.715 software 00:06:06.715 00:06:06.715 real 0m0.301s 00:06:06.715 user 0m0.037s 00:06:06.715 sys 0m0.011s 00:06:06.715 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.715 12:01:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.715 ************************************ 00:06:06.715 END TEST accel_assign_opcode 00:06:06.715 ************************************ 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.715 12:01:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 870722 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 870722 ']' 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 870722 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870722 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870722' 00:06:06.715 killing process with pid 870722 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@967 -- # kill 870722 00:06:06.715 12:01:14 accel_rpc -- common/autotest_common.sh@972 -- # wait 870722 00:06:06.973 00:06:06.973 real 0m1.075s 00:06:06.973 user 0m1.002s 00:06:06.973 sys 0m0.440s 00:06:06.973 12:01:14 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.973 12:01:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.973 ************************************ 00:06:06.973 END TEST accel_rpc 00:06:06.973 ************************************ 00:06:06.973 12:01:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.973 12:01:14 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:06.973 12:01:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.973 12:01:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.973 12:01:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.973 ************************************ 00:06:06.973 START TEST app_cmdline 00:06:06.973 ************************************ 00:06:06.973 12:01:14 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:07.242 * Looking for test storage... 
00:06:07.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:07.242 12:01:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:07.242 12:01:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=870926 00:06:07.242 12:01:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:07.242 12:01:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 870926 00:06:07.242 12:01:14 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 870926 ']' 00:06:07.242 12:01:14 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.242 12:01:14 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.242 12:01:14 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.242 12:01:14 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.242 12:01:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.242 [2024-07-22 12:01:15.007129] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:07.242 [2024-07-22 12:01:15.007223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870926 ] 00:06:07.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.242 [2024-07-22 12:01:15.040496] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
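Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on this spdk_tgt instance: it is the allowlist that makes the env_dpdk_get_mem_stats call further down fail with JSON-RPC error -32601 even though the method exists. A hypothetical repro of that behaviour:
# Only allow-listed methods are callable on this target:
$SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
$SPDK/scripts/rpc.py spdk_get_version        # succeeds (version JSON below)
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats  # fails: -32601 "Method not found"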
00:06:07.242 [2024-07-22 12:01:15.067271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.242 [2024-07-22 12:01:15.151542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.499 12:01:15 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.499 12:01:15 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:07.499 12:01:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:07.756 { 00:06:07.756 "version": "SPDK v24.09-pre git sha1 8fb860b73", 00:06:07.756 "fields": { 00:06:07.756 "major": 24, 00:06:07.756 "minor": 9, 00:06:07.756 "patch": 0, 00:06:07.756 "suffix": "-pre", 00:06:07.756 "commit": "8fb860b73" 00:06:07.756 } 00:06:07.757 } 00:06:07.757 12:01:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:07.757 12:01:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:07.757 12:01:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:07.757 12:01:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:07.757 12:01:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:07.757 12:01:15 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.757 12:01:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.757 12:01:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:07.757 12:01:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:07.757 12:01:15 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.014 12:01:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:08.014 12:01:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:08.014 12:01:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.014 request: 00:06:08.014 { 00:06:08.014 "method": 
"env_dpdk_get_mem_stats", 00:06:08.014 "req_id": 1 00:06:08.014 } 00:06:08.014 Got JSON-RPC error response 00:06:08.014 response: 00:06:08.014 { 00:06:08.014 "code": -32601, 00:06:08.014 "message": "Method not found" 00:06:08.014 } 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.014 12:01:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 870926 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 870926 ']' 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 870926 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.014 12:01:15 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870926 00:06:08.272 12:01:15 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.272 12:01:15 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.272 12:01:15 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870926' 00:06:08.272 killing process with pid 870926 00:06:08.272 12:01:15 app_cmdline -- common/autotest_common.sh@967 -- # kill 870926 00:06:08.272 12:01:15 app_cmdline -- common/autotest_common.sh@972 -- # wait 870926 00:06:08.530 00:06:08.530 real 0m1.462s 00:06:08.530 user 0m1.799s 00:06:08.530 sys 0m0.447s 00:06:08.530 12:01:16 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.530 12:01:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.530 ************************************ 00:06:08.530 END TEST app_cmdline 00:06:08.530 ************************************ 00:06:08.530 12:01:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.530 12:01:16 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:08.531 12:01:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.531 12:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.531 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.531 ************************************ 00:06:08.531 START TEST version 00:06:08.531 ************************************ 00:06:08.531 12:01:16 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:08.788 * Looking for test storage... 
00:06:08.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:08.788 12:01:16 version -- app/version.sh@17 -- # get_header_version major 00:06:08.788 12:01:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # cut -f2 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.788 12:01:16 version -- app/version.sh@17 -- # major=24 00:06:08.788 12:01:16 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.788 12:01:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # cut -f2 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.788 12:01:16 version -- app/version.sh@18 -- # minor=9 00:06:08.788 12:01:16 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.788 12:01:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # cut -f2 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.788 12:01:16 version -- app/version.sh@19 -- # patch=0 00:06:08.788 12:01:16 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.788 12:01:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # cut -f2 00:06:08.788 12:01:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.788 12:01:16 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.788 12:01:16 version -- app/version.sh@22 -- # version=24.9 00:06:08.788 12:01:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.788 12:01:16 version -- app/version.sh@28 -- # version=24.9rc0 00:06:08.788 12:01:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:08.788 12:01:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.788 12:01:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:08.788 12:01:16 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:08.788 00:06:08.788 real 0m0.105s 00:06:08.788 user 0m0.048s 00:06:08.788 sys 0m0.079s 00:06:08.788 12:01:16 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.788 12:01:16 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.788 ************************************ 00:06:08.788 END TEST version 00:06:08.788 ************************************ 00:06:08.788 12:01:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.788 12:01:16 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@198 -- # uname -s 00:06:08.788 12:01:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:08.788 12:01:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:08.788 12:01:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 
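Each get_header_version call in the version test above is the same three-stage pipeline over include/spdk/version.h: grep the #define, cut the tab-separated value, strip the quotes. Condensed into one helper (a sketch with the repository path shortened; the traced script spells out the absolute path every time):

    # Pull one component out of version.h, e.g. SPDK_VERSION_MAJOR.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version major)     # 24
    minor=$(get_header_version minor)     # 9
    patch=$(get_header_version patch)     # 0
    suffix=$(get_header_version suffix)   # -pre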
00:06:08.788 12:01:16 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:08.788 12:01:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.788 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.788 12:01:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:08.788 12:01:16 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:08.788 12:01:16 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:08.788 12:01:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:08.788 12:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.788 12:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.788 ************************************ 00:06:08.788 START TEST nvmf_tcp 00:06:08.788 ************************************ 00:06:08.788 12:01:16 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:08.788 * Looking for test storage... 00:06:08.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.788 12:01:16 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.789 12:01:16 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.789 12:01:16 
nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.789 12:01:16 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.789 12:01:16 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.789 12:01:16 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.789 12:01:16 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.789 12:01:16 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:08.789 12:01:16 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:08.789 12:01:16 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.789 12:01:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:08.789 12:01:16 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:08.789 12:01:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:08.789 12:01:16 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.789 12:01:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.789 ************************************ 00:06:08.789 START TEST nvmf_example 00:06:08.789 ************************************ 00:06:08.789 12:01:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:08.789 * Looking for test storage... 00:06:09.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:09.046 12:01:16 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.046 12:01:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:09.047 12:01:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:10.943 12:01:18 
nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:10.943 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:10.943 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:10.943 
Found net devices under 0000:0a:00.0: cvl_0_0 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:10.943 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:10.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:10.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:06:10.943 00:06:10.943 --- 10.0.0.2 ping statistics --- 00:06:10.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.943 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:10.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:10.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:06:10.943 00:06:10.943 --- 10.0.0.1 ping statistics --- 00:06:10.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.943 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=872938 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 872938 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 872938 ']' 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.943 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.944 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:10.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.944 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.944 12:01:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:10.944 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:11.888 12:01:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1'
00:06:11.888 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.078 Initializing NVMe Controllers
00:06:24.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:24.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:24.078 Initialization complete. Launching workers.
00:06:24.078 ========================================================
00:06:24.078 Latency(us)
00:06:24.078 Device Information : IOPS MiB/s Average min max
00:06:24.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15322.00 59.85 4179.23 881.31 20233.22
00:06:24.078 ========================================================
00:06:24.078 Total : 15322.00 59.85 4179.23 881.31 20233.22
00:06:24.078
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:24.078 rmmod nvme_tcp
00:06:24.078 rmmod nvme_fabrics
00:06:24.078 rmmod nvme_keyring
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 872938 ']'
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 872938
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 872938 ']'
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 872938
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872938
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872938'
00:06:24.078 killing process with pid 872938
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 872938
00:06:24.078 12:01:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 872938
00:06:24.078 nvmf threads initialize successfully
00:06:24.078 bdev subsystem init successfully
00:06:24.078 created a nvmf target service
00:06:24.078 create targets's poll groups done
00:06:24.078 all subsystems of target started
00:06:24.078 nvmf target is running
00:06:24.078 all subsystems of target stopped
00:06:24.078 destroy targets's poll groups done
00:06:24.078 destroyed the nvmf target service
00:06:24.078 bdev subsystem
finish successfully 00:06:24.078 nvmf threads destroy successfully 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:24.078 12:01:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.336 12:01:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:24.336 12:01:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:24.336 12:01:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.336 12:01:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 00:06:24.596 real 0m15.613s 00:06:24.596 user 0m44.896s 00:06:24.596 sys 0m3.084s 00:06:24.596 12:01:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.596 12:01:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 ************************************ 00:06:24.596 END TEST nvmf_example 00:06:24.596 ************************************ 00:06:24.596 12:01:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:24.596 12:01:32 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:24.596 12:01:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:24.596 12:01:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.596 12:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 ************************************ 00:06:24.596 START TEST nvmf_filesystem 00:06:24.596 ************************************ 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:24.596 * Looking for test storage... 
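Before the filesystem test reuses the same wiring, it is worth restating the topology nvmftestinit assembled earlier in this run: one port of the e810 pair (cvl_0_0) is moved into a network namespace to host the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator. The essential commands, condensed from the nvmf/common.sh trace above (error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # reachability check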
00:06:24.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:24.596 12:01:32 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:24.596 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:24.596 #define SPDK_CONFIG_H 00:06:24.596 #define SPDK_CONFIG_APPS 1 00:06:24.596 #define SPDK_CONFIG_ARCH native 00:06:24.596 #undef SPDK_CONFIG_ASAN 00:06:24.596 #undef SPDK_CONFIG_AVAHI 00:06:24.596 #undef SPDK_CONFIG_CET 00:06:24.596 #define SPDK_CONFIG_COVERAGE 1 00:06:24.596 #define SPDK_CONFIG_CROSS_PREFIX 00:06:24.596 #undef SPDK_CONFIG_CRYPTO 00:06:24.596 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:24.596 #undef SPDK_CONFIG_CUSTOMOCF 00:06:24.596 #undef SPDK_CONFIG_DAOS 00:06:24.596 #define SPDK_CONFIG_DAOS_DIR 00:06:24.596 #define SPDK_CONFIG_DEBUG 1 00:06:24.596 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:24.596 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:24.596 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:24.596 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:24.596 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:24.596 #undef SPDK_CONFIG_DPDK_UADK 00:06:24.596 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:24.596 #define SPDK_CONFIG_EXAMPLES 1 00:06:24.596 #undef SPDK_CONFIG_FC 00:06:24.596 #define SPDK_CONFIG_FC_PATH 00:06:24.596 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:24.596 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:24.596 #undef SPDK_CONFIG_FUSE 00:06:24.596 #undef SPDK_CONFIG_FUZZER 00:06:24.596 #define SPDK_CONFIG_FUZZER_LIB 00:06:24.596 #undef SPDK_CONFIG_GOLANG 00:06:24.596 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:24.596 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:24.596 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:24.596 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:24.596 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:24.596 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:24.596 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:24.596 #define SPDK_CONFIG_IDXD 1 00:06:24.596 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:24.596 #undef SPDK_CONFIG_IPSEC_MB 00:06:24.596 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:24.596 #define SPDK_CONFIG_ISAL 1 00:06:24.596 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:24.596 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:24.596 #define 
SPDK_CONFIG_LIBDIR 00:06:24.596 #undef SPDK_CONFIG_LTO 00:06:24.596 #define SPDK_CONFIG_MAX_LCORES 128 00:06:24.596 #define SPDK_CONFIG_NVME_CUSE 1 00:06:24.596 #undef SPDK_CONFIG_OCF 00:06:24.596 #define SPDK_CONFIG_OCF_PATH 00:06:24.596 #define SPDK_CONFIG_OPENSSL_PATH 00:06:24.596 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:24.596 #define SPDK_CONFIG_PGO_DIR 00:06:24.596 #undef SPDK_CONFIG_PGO_USE 00:06:24.596 #define SPDK_CONFIG_PREFIX /usr/local 00:06:24.596 #undef SPDK_CONFIG_RAID5F 00:06:24.596 #undef SPDK_CONFIG_RBD 00:06:24.596 #define SPDK_CONFIG_RDMA 1 00:06:24.596 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:24.596 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:24.596 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:24.596 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:24.596 #define SPDK_CONFIG_SHARED 1 00:06:24.596 #undef SPDK_CONFIG_SMA 00:06:24.596 #define SPDK_CONFIG_TESTS 1 00:06:24.596 #undef SPDK_CONFIG_TSAN 00:06:24.596 #define SPDK_CONFIG_UBLK 1 00:06:24.597 #define SPDK_CONFIG_UBSAN 1 00:06:24.597 #undef SPDK_CONFIG_UNIT_TESTS 00:06:24.597 #undef SPDK_CONFIG_URING 00:06:24.597 #define SPDK_CONFIG_URING_PATH 00:06:24.597 #undef SPDK_CONFIG_URING_ZNS 00:06:24.597 #undef SPDK_CONFIG_USDT 00:06:24.597 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:24.597 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:24.597 #define SPDK_CONFIG_VFIO_USER 1 00:06:24.597 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:24.597 #define SPDK_CONFIG_VHOST 1 00:06:24.597 #define SPDK_CONFIG_VIRTIO 1 00:06:24.597 #undef SPDK_CONFIG_VTUNE 00:06:24.597 #define SPDK_CONFIG_VTUNE_DIR 00:06:24.597 #define SPDK_CONFIG_WERROR 1 00:06:24.597 #define SPDK_CONFIG_WPDK_DIR 00:06:24.597 #undef SPDK_CONFIG_XNVME 00:06:24.597 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:24.597 
12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:24.597 
12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:24.597 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 874646 ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 874646 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.A4V7yu 00:06:24.598 
12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.A4V7yu/tests/target /tmp/spdk.A4V7yu 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=54025179136 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7969529856 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996459520 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=897024 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:24.598 * Looking for test storage... 
00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=54025179136 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10184122368 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.598 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:24.599 12:01:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:27.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:27.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:27.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:27.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.156 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:27.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:27.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:06:27.156 00:06:27.157 --- 10.0.0.2 ping statistics --- 00:06:27.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.157 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:27.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:06:27.157 00:06:27.157 --- 10.0.0.1 ping statistics --- 00:06:27.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.157 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.157 ************************************ 00:06:27.157 START TEST nvmf_filesystem_no_in_capsule 00:06:27.157 ************************************ 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=876274 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 876274 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
876274 ']' 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.157 [2024-07-22 12:01:34.700857] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:27.157 [2024-07-22 12:01:34.700949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.157 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.157 [2024-07-22 12:01:34.738847] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.157 [2024-07-22 12:01:34.770568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.157 [2024-07-22 12:01:34.863507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.157 [2024-07-22 12:01:34.863568] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.157 [2024-07-22 12:01:34.863584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.157 [2024-07-22 12:01:34.863597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.157 [2024-07-22 12:01:34.863609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
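The nvmf_tcp_init sequence traced above boils down to a short recipe: one port of the dual-port e810 NIC (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch, with interface and namespace names taken from this run (other rigs will report different net devices):

    ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                         # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1                  # target ns -> root ns

The two pings are the gate: only after both directions answer does the harness modprobe nvme-tcp and launch nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), whose EAL and app startup notices appear above and whose reactor threads come up next.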
00:06:27.157 [2024-07-22 12:01:34.863682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.157 [2024-07-22 12:01:34.863740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.157 [2024-07-22 12:01:34.863859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.157 [2024-07-22 12:01:34.863861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.157 12:01:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.157 [2024-07-22 12:01:35.019532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.157 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.414 Malloc1 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.414 [2024-07-22 12:01:35.208587] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:27.414 { 00:06:27.414 "name": "Malloc1", 00:06:27.414 "aliases": [ 00:06:27.414 "f38109b5-1e3c-4b3f-981a-9655553837a2" 00:06:27.414 ], 00:06:27.414 "product_name": "Malloc disk", 00:06:27.414 "block_size": 512, 00:06:27.414 "num_blocks": 1048576, 00:06:27.414 "uuid": "f38109b5-1e3c-4b3f-981a-9655553837a2", 00:06:27.414 "assigned_rate_limits": { 00:06:27.414 "rw_ios_per_sec": 0, 00:06:27.414 "rw_mbytes_per_sec": 0, 00:06:27.414 "r_mbytes_per_sec": 0, 00:06:27.414 "w_mbytes_per_sec": 0 00:06:27.414 }, 00:06:27.414 "claimed": true, 00:06:27.414 "claim_type": "exclusive_write", 00:06:27.414 "zoned": false, 00:06:27.414 "supported_io_types": { 00:06:27.414 "read": true, 00:06:27.414 "write": true, 00:06:27.414 "unmap": true, 00:06:27.414 "flush": true, 00:06:27.414 "reset": true, 00:06:27.414 "nvme_admin": false, 00:06:27.414 "nvme_io": false, 00:06:27.414 "nvme_io_md": false, 00:06:27.414 "write_zeroes": true, 00:06:27.414 "zcopy": true, 00:06:27.414 "get_zone_info": false, 00:06:27.414 "zone_management": false, 00:06:27.414 "zone_append": false, 00:06:27.414 "compare": false, 00:06:27.414 "compare_and_write": false, 00:06:27.414 "abort": true, 00:06:27.414 "seek_hole": false, 00:06:27.414 "seek_data": false, 00:06:27.414 "copy": true, 00:06:27.414 "nvme_iov_md": false 00:06:27.414 }, 00:06:27.414 "memory_domains": [ 00:06:27.414 { 
00:06:27.414 "dma_device_id": "system", 00:06:27.414 "dma_device_type": 1 00:06:27.414 }, 00:06:27.414 { 00:06:27.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.414 "dma_device_type": 2 00:06:27.414 } 00:06:27.414 ], 00:06:27.414 "driver_specific": {} 00:06:27.414 } 00:06:27.414 ]' 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:27.414 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:28.342 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:28.342 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:28.342 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:28.342 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:28.342 12:01:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:30.274 12:01:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:30.274 12:01:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:30.274 12:01:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:30.274 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:30.531 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:31.095 12:01:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:32.024 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.025 ************************************ 00:06:32.025 START TEST filesystem_ext4 00:06:32.025 ************************************ 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:32.025 12:01:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:32.025 12:01:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:32.025 mke2fs 1.46.5 (30-Dec-2021) 00:06:32.281 Discarding device blocks: 0/522240 done 00:06:32.281 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:32.281 Filesystem UUID: fa83d303-defd-460b-8765-655991c2e699 00:06:32.281 Superblock backups stored on blocks: 00:06:32.281 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:32.281 00:06:32.281 Allocating group tables: 0/64 done 00:06:32.281 Writing inode tables: 0/64 done 00:06:32.281 Creating journal (8192 blocks): done 00:06:32.281 Writing superblocks and filesystem accounting information: 0/64 done 00:06:32.281 00:06:32.281 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:32.281 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 876274 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:32.537 00:06:32.537 real 0m0.549s 00:06:32.537 user 0m0.017s 00:06:32.537 sys 0m0.056s 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:32.537 ************************************ 00:06:32.537 END TEST filesystem_ext4 00:06:32.537 ************************************ 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.537 12:01:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.537 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.794 ************************************ 00:06:32.794 START TEST filesystem_btrfs 00:06:32.794 ************************************ 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:32.794 btrfs-progs v6.6.2 00:06:32.794 See https://btrfs.readthedocs.io for more information. 00:06:32.794 00:06:32.794 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:32.794 NOTE: several default settings have changed in version 5.15, please make sure 00:06:32.794 this does not affect your deployments: 00:06:32.794 - DUP for metadata (-m dup) 00:06:32.794 - enabled no-holes (-O no-holes) 00:06:32.794 - enabled free-space-tree (-R free-space-tree) 00:06:32.794 00:06:32.794 Label: (null) 00:06:32.794 UUID: 77ce7677-e807-4e05-8f57-c3853a2bc613 00:06:32.794 Node size: 16384 00:06:32.794 Sector size: 4096 00:06:32.794 Filesystem size: 510.00MiB 00:06:32.794 Block group profiles: 00:06:32.794 Data: single 8.00MiB 00:06:32.794 Metadata: DUP 32.00MiB 00:06:32.794 System: DUP 8.00MiB 00:06:32.794 SSD detected: yes 00:06:32.794 Zoned device: no 00:06:32.794 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:32.794 Runtime features: free-space-tree 00:06:32.794 Checksum: crc32c 00:06:32.794 Number of devices: 1 00:06:32.794 Devices: 00:06:32.794 ID SIZE PATH 00:06:32.794 1 510.00MiB /dev/nvme0n1p1 00:06:32.794 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:32.794 12:01:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 876274 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:33.739 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:33.739 00:06:33.739 real 0m0.982s 00:06:33.740 user 0m0.018s 00:06:33.740 sys 0m0.115s 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:33.740 ************************************ 00:06:33.740 END TEST filesystem_btrfs 00:06:33.740 ************************************ 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.740 ************************************ 00:06:33.740 START TEST filesystem_xfs 00:06:33.740 ************************************ 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:33.740 12:01:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:33.740 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:33.740 = sectsz=512 attr=2, projid32bit=1 00:06:33.740 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:33.740 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:33.740 data = bsize=4096 blocks=130560, imaxpct=25 00:06:33.740 = sunit=0 swidth=0 blks 00:06:33.740 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:33.740 log =internal log bsize=4096 blocks=16384, version=2 00:06:33.740 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:33.740 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:35.107 Discarding blocks...Done. 
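All three mkfs invocations in this test go through the same make_filesystem helper, and the trace shows exactly what it does: pick the right force flag for the filesystem (mkfs.ext4 takes -F, mkfs.btrfs and mkfs.xfs take -f) and run mkfs on the partition. A minimal reconstruction from the trace lines above (the helper also keeps a retry counter i whose failure path never fires in this run, so it is elided here):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F                       # mkfs.ext4 forces with -F
        else
            force=-f                       # mkfs.btrfs / mkfs.xfs force with -f
        fi
        mkfs."$fstype" $force "$dev_name" && return 0
    }

After each mkfs, the test mounts the partition at /mnt/device, creates and removes a file, syncs, unmounts, and checks with kill -0 that the nvmf_tgt process survived the I/O.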
00:06:35.107 12:01:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:35.107 12:01:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 876274 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:37.629 00:06:37.629 real 0m3.581s 00:06:37.629 user 0m0.016s 00:06:37.629 sys 0m0.060s 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:37.629 ************************************ 00:06:37.629 END TEST filesystem_xfs 00:06:37.629 ************************************ 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:37.629 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:37.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:37.888 12:01:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 876274 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 876274 ']' 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 876274 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 876274 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 876274' 00:06:37.888 killing process with pid 876274 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 876274 00:06:37.888 12:01:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 876274 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:38.147 00:06:38.147 real 0m11.394s 00:06:38.147 user 0m43.699s 00:06:38.147 sys 0m1.734s 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 ************************************ 00:06:38.147 END TEST nvmf_filesystem_no_in_capsule 00:06:38.147 ************************************ 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.147 12:01:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 ************************************ 00:06:38.407 START TEST nvmf_filesystem_in_capsule 00:06:38.407 ************************************ 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=877831 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 877831 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 877831 ']' 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.407 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 [2024-07-22 12:01:46.149159] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:38.407 [2024-07-22 12:01:46.149240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.407 [2024-07-22 12:01:46.185230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:38.407 [2024-07-22 12:01:46.215551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.407 [2024-07-22 12:01:46.304251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:38.407 [2024-07-22 12:01:46.304313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.407 [2024-07-22 12:01:46.304330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.407 [2024-07-22 12:01:46.304344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.407 [2024-07-22 12:01:46.304355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.407 [2024-07-22 12:01:46.304437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.407 [2024-07-22 12:01:46.304508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.407 [2024-07-22 12:01:46.304608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.407 [2024-07-22 12:01:46.304610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 [2024-07-22 12:01:46.461567] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.666 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.925 Malloc1 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.925 12:01:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.925 [2024-07-22 12:01:46.653053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:38.925 { 00:06:38.925 "name": "Malloc1", 00:06:38.925 "aliases": [ 00:06:38.925 "de6a6aac-7dca-427b-8455-fafa29571118" 00:06:38.925 ], 00:06:38.925 "product_name": "Malloc disk", 00:06:38.925 "block_size": 512, 00:06:38.925 "num_blocks": 1048576, 00:06:38.925 "uuid": "de6a6aac-7dca-427b-8455-fafa29571118", 00:06:38.925 "assigned_rate_limits": { 00:06:38.925 "rw_ios_per_sec": 0, 00:06:38.925 "rw_mbytes_per_sec": 0, 00:06:38.925 "r_mbytes_per_sec": 0, 00:06:38.925 "w_mbytes_per_sec": 0 00:06:38.925 }, 00:06:38.925 "claimed": true, 00:06:38.925 "claim_type": "exclusive_write", 00:06:38.925 "zoned": false, 00:06:38.925 "supported_io_types": { 00:06:38.925 "read": true, 00:06:38.925 "write": true, 00:06:38.925 "unmap": true, 00:06:38.925 "flush": true, 00:06:38.925 "reset": true, 00:06:38.925 "nvme_admin": false, 00:06:38.925 "nvme_io": false, 00:06:38.925 "nvme_io_md": false, 00:06:38.925 "write_zeroes": true, 
00:06:38.925 "zcopy": true, 00:06:38.925 "get_zone_info": false, 00:06:38.925 "zone_management": false, 00:06:38.925 "zone_append": false, 00:06:38.925 "compare": false, 00:06:38.925 "compare_and_write": false, 00:06:38.925 "abort": true, 00:06:38.925 "seek_hole": false, 00:06:38.925 "seek_data": false, 00:06:38.925 "copy": true, 00:06:38.925 "nvme_iov_md": false 00:06:38.925 }, 00:06:38.925 "memory_domains": [ 00:06:38.925 { 00:06:38.925 "dma_device_id": "system", 00:06:38.925 "dma_device_type": 1 00:06:38.925 }, 00:06:38.925 { 00:06:38.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.925 "dma_device_type": 2 00:06:38.925 } 00:06:38.925 ], 00:06:38.925 "driver_specific": {} 00:06:38.925 } 00:06:38.925 ]' 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:38.925 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:38.926 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:38.926 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:38.926 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:38.926 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:38.926 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:38.926 12:01:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:39.860 12:01:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:39.860 12:01:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:39.860 12:01:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:39.860 12:01:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:39.860 12:01:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:41.753 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:41.753 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:41.754 12:01:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:41.754 12:01:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:43.121 12:01:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.052 ************************************ 00:06:44.052 START TEST filesystem_in_capsule_ext4 00:06:44.052 ************************************ 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:44.052 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:44.053 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:44.053 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:44.053 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:44.053 12:01:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:44.053 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:44.053 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:44.053 12:01:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:44.053 mke2fs 1.46.5 (30-Dec-2021) 00:06:44.053 Discarding device blocks: 0/522240 done 00:06:44.053 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:44.053 Filesystem UUID: 8eea20e2-0d46-4c5b-b79e-cd7cb8c30d8f 00:06:44.053 Superblock backups stored on blocks: 00:06:44.053 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:44.053 00:06:44.053 Allocating group tables: 0/64 done 00:06:44.053 Writing inode tables: 0/64 done 00:06:44.982 Creating journal (8192 blocks): done 00:06:45.528 Writing superblocks and filesystem accounting information: 0/64 done 00:06:45.528 00:06:45.528 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:45.528 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 877831 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.786 00:06:45.786 real 0m1.963s 00:06:45.786 user 0m0.018s 00:06:45.786 sys 0m0.052s 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
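The ext4 pass above is the template that the btrfs and xfs passes below repeat. A minimal bash sketch of the pattern, condensed from the make_filesystem and filesystem.sh xtrace in this log (variable names such as $nvmfpid are illustrative; the real autotest helpers add retry loops and xtrace handling around the same commands):

    fstype=ext4
    dev=/dev/nvme0n1p1
    force=-f
    [ "$fstype" = ext4 ] && force=-F        # mkfs.ext4 forces with -F; btrfs and xfs use -f
    mkfs."$fstype" "$force" "$dev"          # format the partition backed by the NVMe-oF namespace
    mount "$dev" /mnt/device                # smoke test: mount, write, sync, remove, unmount
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # the nvmf_tgt process (877831 here) must survive the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1   # device and partition must still be visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1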
00:06:45.786 ************************************ 00:06:45.786 END TEST filesystem_in_capsule_ext4 00:06:45.786 ************************************ 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.786 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.786 ************************************ 00:06:45.786 START TEST filesystem_in_capsule_btrfs 00:06:45.787 ************************************ 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:45.787 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:46.044 btrfs-progs v6.6.2 00:06:46.044 See https://btrfs.readthedocs.io for more information. 00:06:46.044 00:06:46.044 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:46.044 NOTE: several default settings have changed in version 5.15, please make sure 00:06:46.044 this does not affect your deployments: 00:06:46.044 - DUP for metadata (-m dup) 00:06:46.044 - enabled no-holes (-O no-holes) 00:06:46.044 - enabled free-space-tree (-R free-space-tree) 00:06:46.044 00:06:46.044 Label: (null) 00:06:46.044 UUID: c287c50a-e54c-4310-bf07-43d229c13fa6 00:06:46.044 Node size: 16384 00:06:46.044 Sector size: 4096 00:06:46.044 Filesystem size: 510.00MiB 00:06:46.044 Block group profiles: 00:06:46.044 Data: single 8.00MiB 00:06:46.044 Metadata: DUP 32.00MiB 00:06:46.044 System: DUP 8.00MiB 00:06:46.044 SSD detected: yes 00:06:46.044 Zoned device: no 00:06:46.044 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:46.044 Runtime features: free-space-tree 00:06:46.044 Checksum: crc32c 00:06:46.044 Number of devices: 1 00:06:46.044 Devices: 00:06:46.044 ID SIZE PATH 00:06:46.044 1 510.00MiB /dev/nvme0n1p1 00:06:46.044 00:06:46.044 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:46.044 12:01:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:46.972 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:46.972 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:46.972 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:46.972 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:46.972 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:46.972 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 877831 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.228 00:06:47.228 real 0m1.234s 00:06:47.228 user 0m0.018s 00:06:47.228 sys 0m0.114s 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:47.228 ************************************ 00:06:47.228 END TEST filesystem_in_capsule_btrfs 00:06:47.228 ************************************ 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.228 ************************************ 00:06:47.228 START TEST filesystem_in_capsule_xfs 00:06:47.228 ************************************ 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:47.228 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:47.229 12:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:47.229 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:47.229 = sectsz=512 attr=2, projid32bit=1 00:06:47.229 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:47.229 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:47.229 data = bsize=4096 blocks=130560, imaxpct=25 00:06:47.229 = sunit=0 swidth=0 blks 00:06:47.229 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:47.229 log =internal log bsize=4096 blocks=16384, version=2 00:06:47.229 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:47.229 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:48.157 Discarding blocks...Done. 
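The mkfs.xfs geometry above agrees with every size reported earlier in the suite: the Malloc bdev advertises 512-byte blocks times 1048576 blocks = 536870912 bytes (512 MiB), and the GPT partition handed to mkfs is 510 MiB once partition-table overhead is taken out. Checking with shell arithmetic (all values taken from the log above):

    echo $(( 512 * 1048576 ))                # 536870912 bytes: bs * nb from the earlier jq checks
    echo $(( 522240 * 1024 ))                # 534773760 bytes: ext4's 522240 1k blocks
    echo $(( 130560 * 4096 ))                # 534773760 bytes: xfs's data section (blocks * bsize)
    echo $(( 130560 * 4096 / 1024 / 1024 ))  # 510 (MiB), matching mkfs.btrfs's "510.00MiB"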
00:06:48.157 12:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:48.157 12:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 877831 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.677 00:06:50.677 real 0m3.362s 00:06:50.677 user 0m0.014s 00:06:50.677 sys 0m0.059s 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:50.677 ************************************ 00:06:50.677 END TEST filesystem_in_capsule_xfs 00:06:50.677 ************************************ 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:50.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:50.677 12:01:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 877831 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 877831 ']' 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 877831 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 877831 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 877831' 00:06:50.677 killing process with pid 877831 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 877831 00:06:50.677 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 877831 00:06:51.243 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:51.243 00:06:51.243 real 0m12.883s 00:06:51.243 user 0m49.619s 00:06:51.243 sys 0m1.730s 00:06:51.243 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.243 12:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.243 ************************************ 00:06:51.244 END TEST nvmf_filesystem_in_capsule 00:06:51.244 ************************************ 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:51.244 rmmod nvme_tcp 00:06:51.244 rmmod nvme_fabrics 00:06:51.244 rmmod nvme_keyring 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.244 12:01:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.777 12:02:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:53.777 00:06:53.777 real 0m28.765s 00:06:53.777 user 1m34.193s 00:06:53.777 sys 0m5.080s 00:06:53.777 12:02:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.777 12:02:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.777 ************************************ 00:06:53.777 END TEST nvmf_filesystem 00:06:53.777 ************************************ 00:06:53.777 12:02:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:53.777 12:02:01 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:53.777 12:02:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.777 12:02:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.777 12:02:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.777 ************************************ 00:06:53.777 START TEST nvmf_target_discovery 00:06:53.777 ************************************ 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:53.777 * Looking for test storage... 
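Before discovery.sh's storage probe continues below, note the teardown that closed the filesystem suite above: partition removal, host-side NVMe-oF disconnect, subsystem deletion over RPC, target shutdown, and kernel-module unload. A condensed sketch of that sequence (scripts/rpc.py and $nvmfpid are illustrative; the rpc_cmd and killprocess wrappers in the log drive the same calls):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the SPDK_TEST partition under a lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the host-side controller
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                 # stop nvmf_tgt (pid 877831 above)
    modprobe -r nvme-tcp nvme-fabrics                  # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    ip -4 addr flush cvl_0_1                           # clear the test interface for the next suite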
00:06:53.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.777 12:02:01 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.778 12:02:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.152 12:02:03 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:55.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:55.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:55.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:55.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.152 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:55.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:06:55.410 00:06:55.410 --- 10.0.0.2 ping statistics --- 00:06:55.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.410 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:06:55.410 00:06:55.410 --- 10.0.0.1 ping statistics --- 00:06:55.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.410 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=881438 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 881438 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 881438 ']' 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:55.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.410 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.410 [2024-07-22 12:02:03.271264] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:06:55.410 [2024-07-22 12:02:03.271361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.410 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.410 [2024-07-22 12:02:03.308981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.667 [2024-07-22 12:02:03.341170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.667 [2024-07-22 12:02:03.433002] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.667 [2024-07-22 12:02:03.433069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.667 [2024-07-22 12:02:03.433096] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.667 [2024-07-22 12:02:03.433110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.667 [2024-07-22 12:02:03.433122] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.667 [2024-07-22 12:02:03.433211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.667 [2024-07-22 12:02:03.433262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.667 [2024-07-22 12:02:03.433382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.667 [2024-07-22 12:02:03.433384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.667 [2024-07-22 12:02:03.583642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:55.667 12:02:03 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.667 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 Null1 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 [2024-07-22 12:02:03.623948] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.925 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 Null2 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 Null3 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 Null4 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.926 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:56.184 00:06:56.184 Discovery Log Number of Records 6, Generation counter 6 00:06:56.184 =====Discovery Log Entry 0====== 00:06:56.184 trtype: tcp 00:06:56.184 adrfam: ipv4 00:06:56.184 subtype: current discovery subsystem 00:06:56.184 treq: not required 00:06:56.184 portid: 0 00:06:56.184 trsvcid: 4420 00:06:56.184 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:56.184 traddr: 10.0.0.2 00:06:56.184 eflags: explicit discovery connections, duplicate discovery information 00:06:56.184 sectype: none 00:06:56.184 =====Discovery Log Entry 1====== 00:06:56.184 trtype: tcp 00:06:56.184 adrfam: ipv4 00:06:56.184 subtype: nvme subsystem 00:06:56.184 treq: not required 00:06:56.184 portid: 0 00:06:56.184 trsvcid: 4420 00:06:56.184 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:56.184 traddr: 10.0.0.2 00:06:56.184 eflags: none 00:06:56.184 sectype: none 00:06:56.185 =====Discovery Log Entry 2====== 00:06:56.185 trtype: tcp 00:06:56.185 adrfam: ipv4 00:06:56.185 subtype: nvme subsystem 00:06:56.185 treq: not required 00:06:56.185 portid: 0 00:06:56.185 trsvcid: 4420 00:06:56.185 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:56.185 traddr: 10.0.0.2 00:06:56.185 eflags: none 00:06:56.185 sectype: none 00:06:56.185 =====Discovery Log Entry 3====== 00:06:56.185 trtype: tcp 00:06:56.185 adrfam: ipv4 00:06:56.185 subtype: nvme subsystem 00:06:56.185 treq: not required 00:06:56.185 portid: 0 00:06:56.185 trsvcid: 4420 00:06:56.185 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:56.185 traddr: 10.0.0.2 
00:06:56.185 eflags: none 00:06:56.185 sectype: none 00:06:56.185 =====Discovery Log Entry 4====== 00:06:56.185 trtype: tcp 00:06:56.185 adrfam: ipv4 00:06:56.185 subtype: nvme subsystem 00:06:56.185 treq: not required 00:06:56.185 portid: 0 00:06:56.185 trsvcid: 4420 00:06:56.185 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:56.185 traddr: 10.0.0.2 00:06:56.185 eflags: none 00:06:56.185 sectype: none 00:06:56.185 =====Discovery Log Entry 5====== 00:06:56.185 trtype: tcp 00:06:56.185 adrfam: ipv4 00:06:56.185 subtype: discovery subsystem referral 00:06:56.185 treq: not required 00:06:56.185 portid: 0 00:06:56.185 trsvcid: 4430 00:06:56.185 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:56.185 traddr: 10.0.0.2 00:06:56.185 eflags: none 00:06:56.185 sectype: none 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:56.185 Perform nvmf subsystem discovery via RPC 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 [ 00:06:56.185 { 00:06:56.185 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:56.185 "subtype": "Discovery", 00:06:56.185 "listen_addresses": [ 00:06:56.185 { 00:06:56.185 "trtype": "TCP", 00:06:56.185 "adrfam": "IPv4", 00:06:56.185 "traddr": "10.0.0.2", 00:06:56.185 "trsvcid": "4420" 00:06:56.185 } 00:06:56.185 ], 00:06:56.185 "allow_any_host": true, 00:06:56.185 "hosts": [] 00:06:56.185 }, 00:06:56.185 { 00:06:56.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:56.185 "subtype": "NVMe", 00:06:56.185 "listen_addresses": [ 00:06:56.185 { 00:06:56.185 "trtype": "TCP", 00:06:56.185 "adrfam": "IPv4", 00:06:56.185 "traddr": "10.0.0.2", 00:06:56.185 "trsvcid": "4420" 00:06:56.185 } 00:06:56.185 ], 00:06:56.185 "allow_any_host": true, 00:06:56.185 "hosts": [], 00:06:56.185 "serial_number": "SPDK00000000000001", 00:06:56.185 "model_number": "SPDK bdev Controller", 00:06:56.185 "max_namespaces": 32, 00:06:56.185 "min_cntlid": 1, 00:06:56.185 "max_cntlid": 65519, 00:06:56.185 "namespaces": [ 00:06:56.185 { 00:06:56.185 "nsid": 1, 00:06:56.185 "bdev_name": "Null1", 00:06:56.185 "name": "Null1", 00:06:56.185 "nguid": "EBB54606F4CF42F6A244AADA5B4DD1C5", 00:06:56.185 "uuid": "ebb54606-f4cf-42f6-a244-aada5b4dd1c5" 00:06:56.185 } 00:06:56.185 ] 00:06:56.185 }, 00:06:56.185 { 00:06:56.185 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:56.185 "subtype": "NVMe", 00:06:56.185 "listen_addresses": [ 00:06:56.185 { 00:06:56.185 "trtype": "TCP", 00:06:56.185 "adrfam": "IPv4", 00:06:56.185 "traddr": "10.0.0.2", 00:06:56.185 "trsvcid": "4420" 00:06:56.185 } 00:06:56.185 ], 00:06:56.185 "allow_any_host": true, 00:06:56.185 "hosts": [], 00:06:56.185 "serial_number": "SPDK00000000000002", 00:06:56.185 "model_number": "SPDK bdev Controller", 00:06:56.185 "max_namespaces": 32, 00:06:56.185 "min_cntlid": 1, 00:06:56.185 "max_cntlid": 65519, 00:06:56.185 "namespaces": [ 00:06:56.185 { 00:06:56.185 "nsid": 1, 00:06:56.185 "bdev_name": "Null2", 00:06:56.185 "name": "Null2", 00:06:56.185 "nguid": "23DD43C769E340F3806C695091A6887D", 00:06:56.185 "uuid": "23dd43c7-69e3-40f3-806c-695091a6887d" 00:06:56.185 } 00:06:56.185 ] 00:06:56.185 }, 00:06:56.185 { 00:06:56.185 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:56.185 "subtype": "NVMe", 00:06:56.185 "listen_addresses": [ 
00:06:56.185 { 00:06:56.185 "trtype": "TCP", 00:06:56.185 "adrfam": "IPv4", 00:06:56.185 "traddr": "10.0.0.2", 00:06:56.185 "trsvcid": "4420" 00:06:56.185 } 00:06:56.185 ], 00:06:56.185 "allow_any_host": true, 00:06:56.185 "hosts": [], 00:06:56.185 "serial_number": "SPDK00000000000003", 00:06:56.185 "model_number": "SPDK bdev Controller", 00:06:56.185 "max_namespaces": 32, 00:06:56.185 "min_cntlid": 1, 00:06:56.185 "max_cntlid": 65519, 00:06:56.185 "namespaces": [ 00:06:56.185 { 00:06:56.185 "nsid": 1, 00:06:56.185 "bdev_name": "Null3", 00:06:56.185 "name": "Null3", 00:06:56.185 "nguid": "A4C8FA4C7C424EC8BA0E20CBFB3B88C4", 00:06:56.185 "uuid": "a4c8fa4c-7c42-4ec8-ba0e-20cbfb3b88c4" 00:06:56.185 } 00:06:56.185 ] 00:06:56.185 }, 00:06:56.185 { 00:06:56.185 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:56.185 "subtype": "NVMe", 00:06:56.185 "listen_addresses": [ 00:06:56.185 { 00:06:56.185 "trtype": "TCP", 00:06:56.185 "adrfam": "IPv4", 00:06:56.185 "traddr": "10.0.0.2", 00:06:56.185 "trsvcid": "4420" 00:06:56.185 } 00:06:56.185 ], 00:06:56.185 "allow_any_host": true, 00:06:56.185 "hosts": [], 00:06:56.185 "serial_number": "SPDK00000000000004", 00:06:56.185 "model_number": "SPDK bdev Controller", 00:06:56.185 "max_namespaces": 32, 00:06:56.185 "min_cntlid": 1, 00:06:56.185 "max_cntlid": 65519, 00:06:56.185 "namespaces": [ 00:06:56.185 { 00:06:56.185 "nsid": 1, 00:06:56.185 "bdev_name": "Null4", 00:06:56.185 "name": "Null4", 00:06:56.185 "nguid": "C2F3A94C19D948E5B81594F1AD9090A6", 00:06:56.185 "uuid": "c2f3a94c-19d9-48e5-b815-94f1ad9090a6" 00:06:56.185 } 00:06:56.185 ] 00:06:56.185 } 00:06:56.185 ] 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.185 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:56.186 
12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:56.186 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:56.186 rmmod nvme_tcp 00:06:56.186 rmmod nvme_fabrics 00:06:56.186 rmmod nvme_keyring 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 881438 ']' 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 881438 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 881438 ']' 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 881438 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881438 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881438' 00:06:56.443 killing process with pid 881438 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 881438 00:06:56.443 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 881438 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.700 12:02:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.601 12:02:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.601 00:06:58.601 real 0m5.283s 00:06:58.601 user 0m4.496s 00:06:58.601 sys 0m1.723s 00:06:58.601 12:02:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.601 12:02:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 
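For reference, the nvmf_target_discovery pass traced above reduces to a short per-backend RPC sequence; the following is a minimal sketch, assuming rpc_cmd wraps scripts/rpc.py against the target started earlier (NQNs, sizes, and addresses taken verbatim from the trace):

# One null bdev per subsystem, exposed as namespace 1 over NVMe/TCP on 10.0.0.2:4420.
for i in $(seq 1 4); do
  rpc_cmd bdev_null_create "Null$i" 102400 512
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
# Discovery service plus one referral; nvme discover then reports 6 records:
# the discovery subsystem, cnode1..cnode4, and the 10.0.0.2:4430 referral.
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# Teardown mirrors setup: nvmf_delete_subsystem, bdev_null_delete, and
# nvmf_discovery_remove_referral, as traced before nvmftestfini.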
00:06:58.601 ************************************ 00:06:58.601 END TEST nvmf_target_discovery 00:06:58.601 ************************************ 00:06:58.601 12:02:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.601 12:02:06 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:58.601 12:02:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.601 12:02:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.601 12:02:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.601 ************************************ 00:06:58.601 START TEST nvmf_referrals 00:06:58.601 ************************************ 00:06:58.601 12:02:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:58.859 * Looking for test storage... 00:06:58.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # 
NVMF_REFERRAL_IP_3=127.0.0.4 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.859 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.860 12:02:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.860 12:02:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.860 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.860 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.860 12:02:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.860 12:02:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.757 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:00.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:00.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.758 
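The block above is nvmf/common.sh classifying candidate NICs by PCI ID (Intel E810/X722 and several Mellanox parts), then resolving each matched PCI function to its kernel net device through sysfs. A rough standalone equivalent of that lookup, as a sketch using the two PCI addresses reported just below:

# Mirror pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*): list the net
# devices the kernel registered for each matched PCI function.
for pci in 0000:0a:00.0 0000:0a:00.1; do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
  done
done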
12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:00.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:00.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.758 12:02:08 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:07:00.758 00:07:00.758 --- 10.0.0.2 ping statistics --- 00:07:00.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.758 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:07:00.758 00:07:00.758 --- 10.0.0.1 ping statistics --- 00:07:00.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.758 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=883528 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 883528 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 883528 ']' 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
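In short, nvmftestinit has split the two ports of the E810 NIC into an initiator side and a target side: cvl_0_0 is moved into a private network namespace and carries the target address, cvl_0_1 stays in the root namespace as the initiator, and iptables opens the NVMe/TCP port between them. A condensed sketch of that plumbing, with device names and addresses exactly as traced:

ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # root ns -> namespace reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1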
00:07:00.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.758 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.031 [2024-07-22 12:02:08.720110] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:07:01.031 [2024-07-22 12:02:08.720191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.031 [2024-07-22 12:02:08.760298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.031 [2024-07-22 12:02:08.786652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.031 [2024-07-22 12:02:08.875838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.031 [2024-07-22 12:02:08.875898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.031 [2024-07-22 12:02:08.875912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.031 [2024-07-22 12:02:08.875923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.031 [2024-07-22 12:02:08.875934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.031 [2024-07-22 12:02:08.875987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.031 [2024-07-22 12:02:08.876045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.031 [2024-07-22 12:02:08.876111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.031 [2024-07-22 12:02:08.876113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.288 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.288 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:01.288 12:02:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.288 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.288 12:02:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 [2024-07-22 12:02:09.028546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 
12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 [2024-07-22 12:02:09.040797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:01.288 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.289 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.289 12:02:09 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.289 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.289 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.545 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 
8009 -o json 00:07:01.851 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:02.108 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:02.108 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:02.108 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:02.108 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:02.108 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.108 12:02:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:02.108 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- 
# echo 127.0.0.2 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.365 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:02.623 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.881 rmmod nvme_tcp 00:07:02.881 rmmod nvme_fabrics 00:07:02.881 rmmod nvme_keyring 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 883528 ']' 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 883528 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 883528 ']' 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 883528 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883528 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883528' 00:07:02.881 killing process with pid 883528 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 883528 00:07:02.881 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 883528 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.140 12:02:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.038 12:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr 
flush cvl_0_1 00:07:05.038 00:07:05.038 real 0m6.467s 00:07:05.038 user 0m9.439s 00:07:05.038 sys 0m2.076s 00:07:05.038 12:02:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.038 12:02:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.038 ************************************ 00:07:05.038 END TEST nvmf_referrals 00:07:05.038 ************************************ 00:07:05.295 12:02:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:05.295 12:02:12 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:05.295 12:02:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:05.295 12:02:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.295 12:02:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.295 ************************************ 00:07:05.295 START TEST nvmf_connect_disconnect 00:07:05.295 ************************************ 00:07:05.295 12:02:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:05.295 * Looking for test storage... 00:07:05.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.295 12:02:13 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
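The PATH values echoed above by paths/export.sh keep growing across tests because each test re-sources /etc/opt/spdk-pkgdep/paths/export.sh, which prepends the same toolchain directories unconditionally. A minimal sketch of an idempotent prepend, if the harness wanted to avoid the duplication (path_prepend and the directory list are illustrative, not part of export.sh):

# Prepend a directory to PATH only when it is not already present.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already there, keep PATH unchanged
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH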
00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.295 12:02:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.192 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.192 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.192 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.192 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.192 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.193 12:02:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:07:07.193 00:07:07.193 --- 10.0.0.2 ping statistics --- 00:07:07.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.193 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:07.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:07.193 00:07:07.193 --- 10.0.0.1 ping statistics --- 00:07:07.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.193 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=885708 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 885708 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 885708 ']' 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.193 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.450 [2024-07-22 12:02:15.146013] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:07:07.450 [2024-07-22 12:02:15.146111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.450 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.450 [2024-07-22 12:02:15.183057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:07.450 [2024-07-22 12:02:15.215088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.450 [2024-07-22 12:02:15.308846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.450 [2024-07-22 12:02:15.308923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.450 [2024-07-22 12:02:15.308941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.450 [2024-07-22 12:02:15.308955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.450 [2024-07-22 12:02:15.308966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.450 [2024-07-22 12:02:15.309024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.450 [2024-07-22 12:02:15.309077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.450 [2024-07-22 12:02:15.309190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.450 [2024-07-22 12:02:15.309192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.707 [2024-07-22 12:02:15.450230] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:07.707 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.708 12:02:15 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.708 [2024-07-22 12:02:15.501335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:07.708 12:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:10.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.944 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:14.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.296 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.401 rmmod nvme_tcp 00:10:59.401 rmmod nvme_fabrics 00:10:59.401 rmmod nvme_keyring 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 885708 ']' 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 885708 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 885708 ']' 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 885708 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:10:59.401 12:06:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885708 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885708' 00:10:59.401 killing process with pid 885708 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 885708 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 885708 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.401 12:06:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.930 12:06:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.930 00:11:01.930 real 3m56.306s 00:11:01.930 user 15m0.021s 00:11:01.930 sys 0m35.022s 00:11:01.930 12:06:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.930 12:06:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.930 ************************************ 00:11:01.930 END TEST nvmf_connect_disconnect 00:11:01.930 ************************************ 00:11:01.930 12:06:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:01.930 12:06:09 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:01.930 12:06:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.930 12:06:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.930 12:06:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.930 ************************************ 00:11:01.930 START TEST nvmf_multitarget 00:11:01.930 ************************************ 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:01.930 * Looking for test storage... 
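Each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" record in the run above is one pass of the connect_disconnect.sh loop: with num_iterations=100 and NVME_CONNECT overridden to 'nvme connect -i 8' (8 I/O queue pairs), the host attaches to the subsystem listening on 10.0.0.2:4420, waits for the namespace, and detaches again. A rough sketch of the loop under those settings (wait_for_ns is an illustrative stand-in for the harness's own readiness check):

# Poll until the attached namespace shows up as a block device (illustrative helper).
wait_for_ns() { until ls /dev/nvme*n1 >/dev/null 2>&1; do sleep 0.1; done; }

for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    wait_for_ns
    # Prints the "NQN:... disconnected 1 controller(s)" line seen above.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done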
00:11:01.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:01.930 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.931 12:06:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.828 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:03.829 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:03.829 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:03.829 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:03.829 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:11:03.829 00:11:03.829 --- 10.0.0.2 ping statistics --- 00:11:03.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.829 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:11:03.829 00:11:03.829 --- 10.0.0.1 ping statistics --- 00:11:03.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.829 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=917504 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 917504 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 917504 ']' 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.829 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:03.829 [2024-07-22 12:06:11.647267] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
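At this point the harness has built a two-endpoint NVMe/TCP topology on a single host: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), so traffic between them crosses the physical link rather than kernel loopback. Condensed from the trace above:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                  # target port, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  ping -c 1 10.0.0.2                             # initiator -> target
  ip netns exec $NS ping -c 1 10.0.0.1           # target -> initiator

Both pings succeeding, as shown, is what gates the nvmf_tgt startup that follows.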
00:11:03.829 [2024-07-22 12:06:11.647360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.829 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.829 [2024-07-22 12:06:11.694828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:03.829 [2024-07-22 12:06:11.726277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.087 [2024-07-22 12:06:11.819533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.087 [2024-07-22 12:06:11.819588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.087 [2024-07-22 12:06:11.819606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.087 [2024-07-22 12:06:11.819629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.087 [2024-07-22 12:06:11.819643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.087 [2024-07-22 12:06:11.819697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.087 [2024-07-22 12:06:11.819752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.087 [2024-07-22 12:06:11.819869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.087 [2024-07-22 12:06:11.819871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:04.087 12:06:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:04.343 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:04.343 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:04.343 "nvmf_tgt_1" 00:11:04.343 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:04.343 "nvmf_tgt_2" 00:11:04.623 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_get_targets 00:11:04.623 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:04.623 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:04.623 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:04.623 true 00:11:04.623 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:04.890 true 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.890 rmmod nvme_tcp 00:11:04.890 rmmod nvme_fabrics 00:11:04.890 rmmod nvme_keyring 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 917504 ']' 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 917504 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 917504 ']' 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 917504 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.890 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 917504 00:11:05.148 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:05.148 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:05.148 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 917504' 00:11:05.148 killing process with pid 917504 00:11:05.148 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 917504 00:11:05.148 12:06:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 917504 00:11:05.148 12:06:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:05.149 12:06:13 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:05.149 12:06:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:05.149 12:06:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.149 12:06:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.149 12:06:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.149 12:06:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.149 12:06:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.690 12:06:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.690 00:11:07.690 real 0m5.737s 00:11:07.690 user 0m6.372s 00:11:07.690 sys 0m1.903s 00:11:07.690 12:06:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.690 12:06:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:07.690 ************************************ 00:11:07.690 END TEST nvmf_multitarget 00:11:07.690 ************************************ 00:11:07.690 12:06:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:07.690 12:06:15 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:07.690 12:06:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.690 12:06:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.690 12:06:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.690 ************************************ 00:11:07.690 START TEST nvmf_rpc 00:11:07.690 ************************************ 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:07.690 * Looking for test storage... 
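The nvmf_multitarget run that just ended (real 0m5.737s) boils down to a create/count/delete cycle against the multitarget_rpc.py proxy seen in the trace, with jq length counting the JSON array returned by nvmf_get_targets. Distilled, with $SPDK_ROOT standing in for the Jenkins checkout path:

  rpc=$SPDK_ROOT/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + two new
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default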
00:11:07.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.690 12:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
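The nqn.2014-08.org.nvmexpress:uuid:5b23e107-... identity sourced above comes from nvme gen-hostnqn and is reused for every connect in this test; the host ID is just the UUID portion of the NQN. A sketch of that derivation (the suffix-strip below is illustrative, not necessarily how common.sh computes it):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420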
00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:09.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:09.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:09.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:09.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:11:09.587 00:11:09.587 --- 10.0.0.2 ping statistics --- 00:11:09.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.587 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:11:09.587 00:11:09.587 --- 10.0.0.1 ping statistics --- 00:11:09.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.587 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.587 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=919610 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 919610 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 919610 ']' 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.588 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.588 [2024-07-22 12:06:17.439165] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:11:09.588 [2024-07-22 12:06:17.439244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.588 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.588 [2024-07-22 12:06:17.480478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:09.588 [2024-07-22 12:06:17.510953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.845 [2024-07-22 12:06:17.604116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
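Because the target port now lives in the namespace, the target application must run there too, hence the ip netns exec wrapper on nvmf_tgt above; its RPC socket at /var/tmp/spdk.sock stays reachable from the root namespace because filesystem-path UNIX sockets are unaffected by ip netns exec. A simplified stand-in for the nvmfappstart/waitforlisten pair (paths relative to the SPDK tree; the polling loop is a sketch, the real waitforlisten does more, including checking the pid):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done       # wait for the RPC listener
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192  # as issued next in the trace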
00:11:09.845 [2024-07-22 12:06:17.604176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.845 [2024-07-22 12:06:17.604191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.845 [2024-07-22 12:06:17.604205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.845 [2024-07-22 12:06:17.604216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.845 [2024-07-22 12:06:17.604309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.845 [2024-07-22 12:06:17.604362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.845 [2024-07-22 12:06:17.604477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.845 [2024-07-22 12:06:17.604479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:09.845 "tick_rate": 2700000000, 00:11:09.845 "poll_groups": [ 00:11:09.845 { 00:11:09.845 "name": "nvmf_tgt_poll_group_000", 00:11:09.845 "admin_qpairs": 0, 00:11:09.845 "io_qpairs": 0, 00:11:09.845 "current_admin_qpairs": 0, 00:11:09.845 "current_io_qpairs": 0, 00:11:09.845 "pending_bdev_io": 0, 00:11:09.845 "completed_nvme_io": 0, 00:11:09.845 "transports": [] 00:11:09.845 }, 00:11:09.845 { 00:11:09.845 "name": "nvmf_tgt_poll_group_001", 00:11:09.845 "admin_qpairs": 0, 00:11:09.845 "io_qpairs": 0, 00:11:09.845 "current_admin_qpairs": 0, 00:11:09.845 "current_io_qpairs": 0, 00:11:09.845 "pending_bdev_io": 0, 00:11:09.845 "completed_nvme_io": 0, 00:11:09.845 "transports": [] 00:11:09.845 }, 00:11:09.845 { 00:11:09.845 "name": "nvmf_tgt_poll_group_002", 00:11:09.845 "admin_qpairs": 0, 00:11:09.845 "io_qpairs": 0, 00:11:09.845 "current_admin_qpairs": 0, 00:11:09.845 "current_io_qpairs": 0, 00:11:09.845 "pending_bdev_io": 0, 00:11:09.845 "completed_nvme_io": 0, 00:11:09.845 "transports": [] 00:11:09.845 }, 00:11:09.845 { 00:11:09.845 "name": "nvmf_tgt_poll_group_003", 00:11:09.845 "admin_qpairs": 0, 00:11:09.845 "io_qpairs": 0, 00:11:09.845 "current_admin_qpairs": 0, 00:11:09.845 "current_io_qpairs": 0, 00:11:09.845 "pending_bdev_io": 0, 00:11:09.845 "completed_nvme_io": 0, 00:11:09.845 "transports": [] 00:11:09.845 } 00:11:09.845 ] 00:11:09.845 }' 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:09.845 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.103 [2024-07-22 12:06:17.857931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:10.103 "tick_rate": 2700000000, 00:11:10.103 "poll_groups": [ 00:11:10.103 { 00:11:10.103 "name": "nvmf_tgt_poll_group_000", 00:11:10.103 "admin_qpairs": 0, 00:11:10.103 "io_qpairs": 0, 00:11:10.103 "current_admin_qpairs": 0, 00:11:10.103 "current_io_qpairs": 0, 00:11:10.103 "pending_bdev_io": 0, 00:11:10.103 "completed_nvme_io": 0, 00:11:10.103 "transports": [ 00:11:10.103 { 00:11:10.103 "trtype": "TCP" 00:11:10.103 } 00:11:10.103 ] 00:11:10.103 }, 00:11:10.103 { 00:11:10.103 "name": "nvmf_tgt_poll_group_001", 00:11:10.103 "admin_qpairs": 0, 00:11:10.103 "io_qpairs": 0, 00:11:10.103 "current_admin_qpairs": 0, 00:11:10.103 "current_io_qpairs": 0, 00:11:10.103 "pending_bdev_io": 0, 00:11:10.103 "completed_nvme_io": 0, 00:11:10.103 "transports": [ 00:11:10.103 { 00:11:10.103 "trtype": "TCP" 00:11:10.103 } 00:11:10.103 ] 00:11:10.103 }, 00:11:10.103 { 00:11:10.103 "name": "nvmf_tgt_poll_group_002", 00:11:10.103 "admin_qpairs": 0, 00:11:10.103 "io_qpairs": 0, 00:11:10.103 "current_admin_qpairs": 0, 00:11:10.103 "current_io_qpairs": 0, 00:11:10.103 "pending_bdev_io": 0, 00:11:10.103 "completed_nvme_io": 0, 00:11:10.103 "transports": [ 00:11:10.103 { 00:11:10.103 "trtype": "TCP" 00:11:10.103 } 00:11:10.103 ] 00:11:10.103 }, 00:11:10.103 { 00:11:10.103 "name": "nvmf_tgt_poll_group_003", 00:11:10.103 "admin_qpairs": 0, 00:11:10.103 "io_qpairs": 0, 00:11:10.103 "current_admin_qpairs": 0, 00:11:10.103 "current_io_qpairs": 0, 00:11:10.103 "pending_bdev_io": 0, 00:11:10.103 "completed_nvme_io": 0, 00:11:10.103 "transports": [ 00:11:10.103 { 00:11:10.103 "trtype": "TCP" 00:11:10.103 } 00:11:10.103 ] 00:11:10.103 } 00:11:10.103 ] 00:11:10.103 }' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.103 Malloc1 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.103 12:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.103 [2024-07-22 12:06:18.019702] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:10.103 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:10.360 [2024-07-22 12:06:18.042223] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:10.360 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:10.360 could not add new controller: failed to write to nvme-fabrics device 00:11:10.360 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:10.360 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:10.360 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:10.360 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:10.360 12:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.360 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.361 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.361 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.361 12:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.924 12:06:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.924 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:10.924 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.924 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:10.924 12:06:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:12.820 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:13.078 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.079 [2024-07-22 12:06:20.781499] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:13.079 Failed to write to 
/dev/nvme-fabrics: Input/output error 00:11:13.079 could not add new controller: failed to write to nvme-fabrics device 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.079 12:06:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.642 12:06:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.643 12:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.643 12:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.643 12:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:13.643 12:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:15.536 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:15.536 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:15.536 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.536 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:15.536 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.536 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:15.536 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.793 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.793 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:15.793 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:15.793 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.794 12:06:23 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.794 [2024-07-22 12:06:23.562651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.794 12:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.357 12:06:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.357 12:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.357 12:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.357 12:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:16.357 12:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:18.262 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:18.262 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:18.262 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.262 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:18.262 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.262 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:18.262 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 [2024-07-22 12:06:26.294585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.520 12:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:19.084 12:06:26 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.084 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:19.084 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.084 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:19.084 12:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:20.978 12:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:20.978 12:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:20.978 12:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.978 12:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:20.978 12:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.978 12:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:20.978 12:06:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.236 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.237 [2024-07-22 12:06:29.051583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.237 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.869 12:06:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.869 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:21.869 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.869 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:21.869 12:06:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:23.765 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:23.765 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:23.765 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.765 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:23.765 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.765 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:23.765 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 
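Annotation: the polling traced above (sleep 2, lsblk, grep -c) is the waitforserial helper from common/autotest_common.sh, which blocks until the freshly connected controller's namespace shows up as a block device carrying the subsystem serial. A minimal sketch reconstructed from the traced commands; the real helper's bookkeeping may differ slightly:

    # Wait until N block devices carrying the given NVMe serial appear (reconstructed sketch).
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        sleep 2
        while (( i++ <= 15 )); do
            # Count block devices whose SERIAL column matches, e.g. SPDKISFASTANDAWESOME.
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

waitforserial_disconnect (traced right after each nvme disconnect) inverts the check: it loops until grep -q -w no longer finds the serial in the lsblk output.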
00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.023 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.024 [2024-07-22 12:06:31.786082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.024 12:06:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.589 12:06:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.589 12:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.589 12:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.589 12:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:24.589 12:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 [2024-07-22 12:06:34.610052] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.116 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:27.117 
12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.117 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.117 12:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.117 12:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.375 12:06:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.375 12:06:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:27.375 12:06:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.375 12:06:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:27.375 12:06:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 [2024-07-22 12:06:37.381566] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 [2024-07-22 12:06:37.429645] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 [2024-07-22 12:06:37.477804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 [2024-07-22 12:06:37.525996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
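Annotation: the second loop traced above (target/rpc.sh@99-107) repeats the subsystem lifecycle five times without ever attaching a host: create, listen, add a namespace, allow any host, then remove the namespace and delete the subsystem. A sketch of one iteration, assuming rpc_cmd forwards to scripts/rpc.py against the running target:

    # Lifecycle churn with no host connect (commands as traced from target/rpc.sh 99-107).
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done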
00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 [2024-07-22 12:06:37.574151] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:29.900 "tick_rate": 2700000000, 00:11:29.900 "poll_groups": [ 00:11:29.900 { 00:11:29.900 "name": "nvmf_tgt_poll_group_000", 00:11:29.900 "admin_qpairs": 2, 00:11:29.900 "io_qpairs": 84, 00:11:29.900 "current_admin_qpairs": 0, 00:11:29.900 "current_io_qpairs": 0, 00:11:29.900 "pending_bdev_io": 0, 00:11:29.900 "completed_nvme_io": 134, 00:11:29.900 "transports": [ 00:11:29.900 { 00:11:29.900 "trtype": "TCP" 00:11:29.900 } 00:11:29.900 ] 00:11:29.900 }, 00:11:29.900 { 00:11:29.900 "name": "nvmf_tgt_poll_group_001", 00:11:29.900 "admin_qpairs": 2, 00:11:29.900 "io_qpairs": 84, 00:11:29.900 "current_admin_qpairs": 0, 00:11:29.900 "current_io_qpairs": 0, 00:11:29.900 "pending_bdev_io": 0, 
00:11:29.900 "completed_nvme_io": 185, 00:11:29.900 "transports": [ 00:11:29.900 { 00:11:29.900 "trtype": "TCP" 00:11:29.900 } 00:11:29.900 ] 00:11:29.900 }, 00:11:29.900 { 00:11:29.900 "name": "nvmf_tgt_poll_group_002", 00:11:29.900 "admin_qpairs": 1, 00:11:29.900 "io_qpairs": 84, 00:11:29.900 "current_admin_qpairs": 0, 00:11:29.900 "current_io_qpairs": 0, 00:11:29.900 "pending_bdev_io": 0, 00:11:29.900 "completed_nvme_io": 184, 00:11:29.900 "transports": [ 00:11:29.900 { 00:11:29.900 "trtype": "TCP" 00:11:29.900 } 00:11:29.900 ] 00:11:29.900 }, 00:11:29.900 { 00:11:29.900 "name": "nvmf_tgt_poll_group_003", 00:11:29.900 "admin_qpairs": 2, 00:11:29.900 "io_qpairs": 84, 00:11:29.900 "current_admin_qpairs": 0, 00:11:29.900 "current_io_qpairs": 0, 00:11:29.900 "pending_bdev_io": 0, 00:11:29.900 "completed_nvme_io": 183, 00:11:29.900 "transports": [ 00:11:29.900 { 00:11:29.900 "trtype": "TCP" 00:11:29.900 } 00:11:29.900 ] 00:11:29.900 } 00:11:29.900 ] 00:11:29.900 }' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.900 rmmod nvme_tcp 00:11:29.900 rmmod nvme_fabrics 00:11:29.900 rmmod nvme_keyring 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 919610 ']' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 919610 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 919610 ']' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 919610 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 919610 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 919610' 00:11:29.900 killing process with pid 919610 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 919610 00:11:29.900 12:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 919610 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.159 12:06:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.690 12:06:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.690 00:11:32.690 real 0m24.949s 00:11:32.690 user 1m20.993s 00:11:32.690 sys 0m4.025s 00:11:32.690 12:06:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.690 12:06:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.690 ************************************ 00:11:32.690 END TEST nvmf_rpc 00:11:32.690 ************************************ 00:11:32.690 12:06:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:32.690 12:06:40 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:32.690 12:06:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:32.690 12:06:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.690 12:06:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.690 ************************************ 00:11:32.690 START TEST nvmf_invalid 00:11:32.690 ************************************ 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:32.690 * Looking for test storage... 
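Annotation: the tail of the trace above is nvmftestfini from nvmf/common.sh tearing the run down: unload the kernel NVMe/TCP stack (retried, since the modules can stay busy briefly), kill the nvmf_tgt process (pid 919610 here, running as reactor_0), and dismantle the network namespace. A condensed, hedged sketch; the helpers' exact retry and cleanup logic may differ:

    # Condensed teardown mirroring the traced nvmftestfini path for TCP.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # can fail while references drain
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"      # killprocess: stop the target reactor
    ip netns delete cvl_0_0_ns_spdk         # remove_spdk_ns (assumed form of the cleanup)
    ip -4 addr flush cvl_0_1                # release the initiator-side test address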
00:11:32.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.690 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.691 12:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.592 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:34.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:34.592 00:11:34.592 --- 10.0.0.2 ping statistics --- 00:11:34.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.592 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:11:34.592 00:11:34.592 --- 10.0.0.1 ping statistics --- 00:11:34.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.592 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=924095 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 924095 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 924095 ']' 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.592 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.593 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.593 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.593 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:34.593 [2024-07-22 12:06:42.333478] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
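At this point nvmf_tcp_init (nvmf/common.sh@418) has split the dual-port E810 across two network stacks: cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened for NVMe/TCP, reachability is verified with ping in both directions, and nvmf_tgt is then started inside the namespace via the NVMF_TARGET_NS_CMD wrapper. Condensed from the commands traced above (interface and namespace names as in this run):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back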
00:11:34.593 [2024-07-22 12:06:42.333572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.593 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.593 [2024-07-22 12:06:42.370988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:34.593 [2024-07-22 12:06:42.402863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.593 [2024-07-22 12:06:42.493902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.593 [2024-07-22 12:06:42.493973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.593 [2024-07-22 12:06:42.493990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.593 [2024-07-22 12:06:42.494004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.593 [2024-07-22 12:06:42.494015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.593 [2024-07-22 12:06:42.494098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.593 [2024-07-22 12:06:42.494156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.593 [2024-07-22 12:06:42.494277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.593 [2024-07-22 12:06:42.494280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:34.851 12:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22503 00:11:35.109 [2024-07-22 12:06:42.889208] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:35.109 12:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:35.109 { 00:11:35.109 "nqn": "nqn.2016-06.io.spdk:cnode22503", 00:11:35.109 "tgt_name": "foobar", 00:11:35.109 "method": "nvmf_create_subsystem", 00:11:35.109 "req_id": 1 00:11:35.109 } 00:11:35.109 Got JSON-RPC error response 00:11:35.109 response: 00:11:35.109 { 00:11:35.109 "code": -32603, 00:11:35.109 "message": "Unable to find target foobar" 00:11:35.109 }' 00:11:35.109 12:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:35.109 { 00:11:35.109 "nqn": "nqn.2016-06.io.spdk:cnode22503", 00:11:35.109 "tgt_name": "foobar", 00:11:35.109 "method": "nvmf_create_subsystem", 00:11:35.109 "req_id": 1 
00:11:35.109 } 00:11:35.109 Got JSON-RPC error response 00:11:35.109 response: 00:11:35.109 { 00:11:35.109 "code": -32603, 00:11:35.109 "message": "Unable to find target foobar" 00:11:35.109 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:35.109 12:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:35.109 12:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18282 00:11:35.366 [2024-07-22 12:06:43.142055] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18282: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:35.366 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:35.366 { 00:11:35.366 "nqn": "nqn.2016-06.io.spdk:cnode18282", 00:11:35.366 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:35.366 "method": "nvmf_create_subsystem", 00:11:35.366 "req_id": 1 00:11:35.366 } 00:11:35.366 Got JSON-RPC error response 00:11:35.366 response: 00:11:35.366 { 00:11:35.366 "code": -32602, 00:11:35.366 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:35.366 }' 00:11:35.366 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:35.366 { 00:11:35.366 "nqn": "nqn.2016-06.io.spdk:cnode18282", 00:11:35.366 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:35.366 "method": "nvmf_create_subsystem", 00:11:35.366 "req_id": 1 00:11:35.366 } 00:11:35.366 Got JSON-RPC error response 00:11:35.366 response: 00:11:35.366 { 00:11:35.366 "code": -32602, 00:11:35.366 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:35.366 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:35.366 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:35.366 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19552 00:11:35.623 [2024-07-22 12:06:43.410972] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19552: invalid model number 'SPDK_Controller' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:35.623 { 00:11:35.623 "nqn": "nqn.2016-06.io.spdk:cnode19552", 00:11:35.623 "model_number": "SPDK_Controller\u001f", 00:11:35.623 "method": "nvmf_create_subsystem", 00:11:35.623 "req_id": 1 00:11:35.623 } 00:11:35.623 Got JSON-RPC error response 00:11:35.623 response: 00:11:35.623 { 00:11:35.623 "code": -32602, 00:11:35.623 "message": "Invalid MN SPDK_Controller\u001f" 00:11:35.623 }' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:35.623 { 00:11:35.623 "nqn": "nqn.2016-06.io.spdk:cnode19552", 00:11:35.623 "model_number": "SPDK_Controller\u001f", 00:11:35.623 "method": "nvmf_create_subsystem", 00:11:35.623 "req_id": 1 00:11:35.623 } 00:11:35.623 Got JSON-RPC error response 00:11:35.623 response: 00:11:35.623 { 00:11:35.623 "code": -32602, 00:11:35.623 "message": "Invalid MN SPDK_Controller\u001f" 00:11:35.623 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' 
'48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 
00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:35.623 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 
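The gen_random_s helper being traced through here (target/invalid.sh@19-31) assembles the string one character per pass: chars holds the ASCII codes 32 through 127, each iteration picks one code, formats it with printf %x, renders it with echo -e '\xHH', and appends the result to string. A rough stand-alone equivalent; the $RANDOM-based index is an assumption, since the selection step itself is not echoed in the xtrace:

# Sketch of the traced loop; index selection via $RANDOM is assumed.
gen_random_s() {
    local length=$1 ll hex string=
    local chars=($(seq 32 127))          # same code range as the traced array
    for (( ll = 0; ll < length; ll++ )); do
        printf -v hex %x "${chars[RANDOM % ${#chars[@]}]}"
        string+=$(echo -e "\x$hex")      # render the code point and append it
    done
    echo "$string"
}

The finished string (echoed just below via target/invalid.sh@31) is then passed to nvmf_create_subsystem as a deliberately over-length serial number, which the target must reject with an "Invalid SN" JSON-RPC error.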
00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bOeZc)Ec#l2Hb*+Xk[#Gx' 00:11:35.624 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bOeZc)Ec#l2Hb*+Xk[#Gx' nqn.2016-06.io.spdk:cnode25169 00:11:35.882 [2024-07-22 12:06:43.703973] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25169: invalid serial number 'bOeZc)Ec#l2Hb*+Xk[#Gx' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:35.882 { 00:11:35.882 "nqn": "nqn.2016-06.io.spdk:cnode25169", 00:11:35.882 "serial_number": 
"bOeZc)Ec#l2Hb*+Xk[#Gx", 00:11:35.882 "method": "nvmf_create_subsystem", 00:11:35.882 "req_id": 1 00:11:35.882 } 00:11:35.882 Got JSON-RPC error response 00:11:35.882 response: 00:11:35.882 { 00:11:35.882 "code": -32602, 00:11:35.882 "message": "Invalid SN bOeZc)Ec#l2Hb*+Xk[#Gx" 00:11:35.882 }' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:35.882 { 00:11:35.882 "nqn": "nqn.2016-06.io.spdk:cnode25169", 00:11:35.882 "serial_number": "bOeZc)Ec#l2Hb*+Xk[#Gx", 00:11:35.882 "method": "nvmf_create_subsystem", 00:11:35.882 "req_id": 1 00:11:35.882 } 00:11:35.882 Got JSON-RPC error response 00:11:35.882 response: 00:11:35.882 { 00:11:35.882 "code": -32602, 00:11:35.882 "message": "Invalid SN bOeZc)Ec#l2Hb*+Xk[#Gx" 00:11:35.882 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:35.882 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
41 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.883 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x7d' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:36.140 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=Y 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '2=A@T)P (9ZE{}JW136-)>W#yj(:}|2$@}6=Y5Fzc' 00:11:36.141 12:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '2=A@T)P (9ZE{}JW136-)>W#yj(:}|2$@}6=Y5Fzc' nqn.2016-06.io.spdk:cnode10260 00:11:36.398 [2024-07-22 12:06:44.093225] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10260: invalid model number '2=A@T)P (9ZE{}JW136-)>W#yj(:}|2$@}6=Y5Fzc' 00:11:36.398 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:36.398 { 00:11:36.398 "nqn": "nqn.2016-06.io.spdk:cnode10260", 00:11:36.398 "model_number": "2=A@T)P (9ZE{}JW136-)>W#yj(:}|2$@}6=Y5Fzc", 00:11:36.398 "method": "nvmf_create_subsystem", 00:11:36.398 "req_id": 1 00:11:36.398 } 00:11:36.398 Got JSON-RPC error response 00:11:36.398 response: 00:11:36.398 { 00:11:36.398 "code": -32602, 00:11:36.398 "message": "Invalid MN 2=A@T)P (9ZE{}JW136-)>W#yj(:}|2$@}6=Y5Fzc" 00:11:36.398 }' 00:11:36.398 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:36.398 { 00:11:36.398 "nqn": "nqn.2016-06.io.spdk:cnode10260", 00:11:36.398 "model_number": "2=A@T)P (9ZE{}JW136-)>W#yj(:}|2$@}6=Y5Fzc", 00:11:36.398 "method": "nvmf_create_subsystem", 00:11:36.398 "req_id": 1 00:11:36.398 } 00:11:36.398 Got JSON-RPC error response 00:11:36.398 response: 00:11:36.398 { 00:11:36.398 "code": -32602, 00:11:36.398 "message": "Invalid MN 2=A@T)P 
(9ZE{}JW136-)>W#yj(:}|2$@}6=Y5Fzc" 00:11:36.398 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:36.398 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:36.655 [2024-07-22 12:06:44.338162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.655 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:36.913 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:36.913 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:36.913 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:36.913 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:36.913 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:37.171 [2024-07-22 12:06:44.847887] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:37.171 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:37.171 { 00:11:37.171 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:37.171 "listen_address": { 00:11:37.171 "trtype": "tcp", 00:11:37.171 "traddr": "", 00:11:37.171 "trsvcid": "4421" 00:11:37.171 }, 00:11:37.171 "method": "nvmf_subsystem_remove_listener", 00:11:37.171 "req_id": 1 00:11:37.171 } 00:11:37.171 Got JSON-RPC error response 00:11:37.171 response: 00:11:37.171 { 00:11:37.171 "code": -32602, 00:11:37.171 "message": "Invalid parameters" 00:11:37.171 }' 00:11:37.171 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:37.171 { 00:11:37.171 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:37.171 "listen_address": { 00:11:37.171 "trtype": "tcp", 00:11:37.171 "traddr": "", 00:11:37.171 "trsvcid": "4421" 00:11:37.171 }, 00:11:37.171 "method": "nvmf_subsystem_remove_listener", 00:11:37.171 "req_id": 1 00:11:37.171 } 00:11:37.171 Got JSON-RPC error response 00:11:37.171 response: 00:11:37.171 { 00:11:37.171 "code": -32602, 00:11:37.171 "message": "Invalid parameters" 00:11:37.171 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:37.171 12:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7037 -i 0 00:11:37.171 [2024-07-22 12:06:45.092700] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7037: invalid cntlid range [0-65519] 00:11:37.428 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:37.428 { 00:11:37.428 "nqn": "nqn.2016-06.io.spdk:cnode7037", 00:11:37.428 "min_cntlid": 0, 00:11:37.428 "method": "nvmf_create_subsystem", 00:11:37.428 "req_id": 1 00:11:37.428 } 00:11:37.428 Got JSON-RPC error response 00:11:37.428 response: 00:11:37.428 { 00:11:37.428 "code": -32602, 00:11:37.428 "message": "Invalid cntlid range [0-65519]" 00:11:37.428 }' 00:11:37.428 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:37.428 { 00:11:37.428 "nqn": "nqn.2016-06.io.spdk:cnode7037", 00:11:37.428 "min_cntlid": 0, 00:11:37.428 "method": "nvmf_create_subsystem", 00:11:37.428 "req_id": 1 00:11:37.428 } 00:11:37.428 Got JSON-RPC error response 00:11:37.428 response: 
00:11:37.428 { 00:11:37.428 "code": -32602, 00:11:37.428 "message": "Invalid cntlid range [0-65519]" 00:11:37.428 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.428 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10151 -i 65520 00:11:37.428 [2024-07-22 12:06:45.333470] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10151: invalid cntlid range [65520-65519] 00:11:37.428 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:37.428 { 00:11:37.428 "nqn": "nqn.2016-06.io.spdk:cnode10151", 00:11:37.428 "min_cntlid": 65520, 00:11:37.428 "method": "nvmf_create_subsystem", 00:11:37.428 "req_id": 1 00:11:37.428 } 00:11:37.428 Got JSON-RPC error response 00:11:37.428 response: 00:11:37.428 { 00:11:37.428 "code": -32602, 00:11:37.428 "message": "Invalid cntlid range [65520-65519]" 00:11:37.428 }' 00:11:37.428 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:37.428 { 00:11:37.428 "nqn": "nqn.2016-06.io.spdk:cnode10151", 00:11:37.428 "min_cntlid": 65520, 00:11:37.428 "method": "nvmf_create_subsystem", 00:11:37.428 "req_id": 1 00:11:37.428 } 00:11:37.428 Got JSON-RPC error response 00:11:37.428 response: 00:11:37.428 { 00:11:37.428 "code": -32602, 00:11:37.428 "message": "Invalid cntlid range [65520-65519]" 00:11:37.428 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.428 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15471 -I 0 00:11:37.685 [2024-07-22 12:06:45.586350] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15471: invalid cntlid range [1-0] 00:11:37.685 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:37.685 { 00:11:37.685 "nqn": "nqn.2016-06.io.spdk:cnode15471", 00:11:37.685 "max_cntlid": 0, 00:11:37.685 "method": "nvmf_create_subsystem", 00:11:37.685 "req_id": 1 00:11:37.685 } 00:11:37.685 Got JSON-RPC error response 00:11:37.685 response: 00:11:37.685 { 00:11:37.685 "code": -32602, 00:11:37.685 "message": "Invalid cntlid range [1-0]" 00:11:37.685 }' 00:11:37.685 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:37.685 { 00:11:37.685 "nqn": "nqn.2016-06.io.spdk:cnode15471", 00:11:37.685 "max_cntlid": 0, 00:11:37.685 "method": "nvmf_create_subsystem", 00:11:37.685 "req_id": 1 00:11:37.685 } 00:11:37.685 Got JSON-RPC error response 00:11:37.685 response: 00:11:37.685 { 00:11:37.685 "code": -32602, 00:11:37.685 "message": "Invalid cntlid range [1-0]" 00:11:37.685 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.685 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19620 -I 65520 00:11:37.943 [2024-07-22 12:06:45.827124] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19620: invalid cntlid range [1-65520] 00:11:37.943 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:37.943 { 00:11:37.943 "nqn": "nqn.2016-06.io.spdk:cnode19620", 00:11:37.943 "max_cntlid": 65520, 00:11:37.943 "method": "nvmf_create_subsystem", 00:11:37.943 "req_id": 1 00:11:37.943 } 00:11:37.943 Got JSON-RPC error response 00:11:37.943 response: 00:11:37.943 { 00:11:37.943 
"code": -32602, 00:11:37.943 "message": "Invalid cntlid range [1-65520]" 00:11:37.943 }' 00:11:37.943 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:37.943 { 00:11:37.943 "nqn": "nqn.2016-06.io.spdk:cnode19620", 00:11:37.943 "max_cntlid": 65520, 00:11:37.943 "method": "nvmf_create_subsystem", 00:11:37.943 "req_id": 1 00:11:37.943 } 00:11:37.943 Got JSON-RPC error response 00:11:37.943 response: 00:11:37.943 { 00:11:37.943 "code": -32602, 00:11:37.943 "message": "Invalid cntlid range [1-65520]" 00:11:37.943 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.943 12:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14563 -i 6 -I 5 00:11:38.202 [2024-07-22 12:06:46.075980] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14563: invalid cntlid range [6-5] 00:11:38.202 12:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:38.202 { 00:11:38.202 "nqn": "nqn.2016-06.io.spdk:cnode14563", 00:11:38.202 "min_cntlid": 6, 00:11:38.202 "max_cntlid": 5, 00:11:38.202 "method": "nvmf_create_subsystem", 00:11:38.202 "req_id": 1 00:11:38.202 } 00:11:38.202 Got JSON-RPC error response 00:11:38.202 response: 00:11:38.202 { 00:11:38.202 "code": -32602, 00:11:38.202 "message": "Invalid cntlid range [6-5]" 00:11:38.202 }' 00:11:38.202 12:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:38.202 { 00:11:38.202 "nqn": "nqn.2016-06.io.spdk:cnode14563", 00:11:38.202 "min_cntlid": 6, 00:11:38.202 "max_cntlid": 5, 00:11:38.202 "method": "nvmf_create_subsystem", 00:11:38.202 "req_id": 1 00:11:38.202 } 00:11:38.202 Got JSON-RPC error response 00:11:38.202 response: 00:11:38.202 { 00:11:38.202 "code": -32602, 00:11:38.202 "message": "Invalid cntlid range [6-5]" 00:11:38.202 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:38.202 12:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:38.499 { 00:11:38.499 "name": "foobar", 00:11:38.499 "method": "nvmf_delete_target", 00:11:38.499 "req_id": 1 00:11:38.499 } 00:11:38.499 Got JSON-RPC error response 00:11:38.499 response: 00:11:38.499 { 00:11:38.499 "code": -32602, 00:11:38.499 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:38.499 }' 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:38.499 { 00:11:38.499 "name": "foobar", 00:11:38.499 "method": "nvmf_delete_target", 00:11:38.499 "req_id": 1 00:11:38.499 } 00:11:38.499 Got JSON-RPC error response 00:11:38.499 response: 00:11:38.499 { 00:11:38.499 "code": -32602, 00:11:38.499 "message": "The specified target doesn't exist, cannot delete it." 
00:11:38.499 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.499 rmmod nvme_tcp 00:11:38.499 rmmod nvme_fabrics 00:11:38.499 rmmod nvme_keyring 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 924095 ']' 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 924095 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 924095 ']' 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 924095 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 924095 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 924095' 00:11:38.499 killing process with pid 924095 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 924095 00:11:38.499 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 924095 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.758 12:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.658 12:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:40.658 00:11:40.658 real 0m8.425s 00:11:40.658 user 0m19.549s 00:11:40.658 sys 0m2.329s 00:11:40.658 12:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.658 12:06:48 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.658 ************************************ 00:11:40.658 END TEST nvmf_invalid 00:11:40.658 ************************************ 00:11:40.658 12:06:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:40.658 12:06:48 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:40.658 12:06:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:40.658 12:06:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.658 12:06:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:40.916 ************************************ 00:11:40.916 START TEST nvmf_abort 00:11:40.916 ************************************ 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:40.916 * Looking for test storage... 00:11:40.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.916 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:40.917 12:06:48 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:11:40.917 12:06:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.816 
12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:42.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:42.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:42.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:42.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:42.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
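
NOTE: the nvmf_tcp_init sequence traced above reduces to the following split-namespace topology, condensed from the exact commands in the trace (cvl_0_0/cvl_0_1 are the two ice/E810 port netdevs discovered earlier; the target side lives in a network namespace so initiator and target can share one host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the namespace

Both directions are then ping-verified, which is the PING output that follows.
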
00:11:42.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:11:42.816 00:11:42.816 --- 10.0.0.2 ping statistics --- 00:11:42.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.816 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:11:42.816 00:11:42.816 --- 10.0.0.1 ping statistics --- 00:11:42.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.816 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=926610 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 926610 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 926610 ']' 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.816 12:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.073 [2024-07-22 12:06:50.783912] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
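
NOTE: nvmfappstart launches the target inside that namespace and blocks in waitforlisten until the RPC socket answers. A minimal stand-in for what the trace shows (the polling loop is an assumption about waitforlisten's behavior, not a copy of it; /var/tmp/spdk.sock is SPDK's default RPC socket):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll until the app is up and accepting RPCs, then configure it
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
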
00:11:43.073 [2024-07-22 12:06:50.783992] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.073 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.073 [2024-07-22 12:06:50.821275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:43.073 [2024-07-22 12:06:50.853448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.073 [2024-07-22 12:06:50.946365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.073 [2024-07-22 12:06:50.946431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.073 [2024-07-22 12:06:50.946447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.073 [2024-07-22 12:06:50.946460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.073 [2024-07-22 12:06:50.946473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.073 [2024-07-22 12:06:50.946566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.073 [2024-07-22 12:06:50.946639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.073 [2024-07-22 12:06:50.946643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.329 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.329 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:11:43.329 12:06:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 [2024-07-22 12:06:51.098990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 Malloc0 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 Delay0 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 [2024-07-22 12:06:51.167886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.330 12:06:51 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:43.330 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.587 [2024-07-22 12:06:51.274830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:45.483 Initializing NVMe Controllers 00:11:45.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:45.483 controller IO queue size 128 less than required 00:11:45.483 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:45.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:45.483 Initialization complete. Launching workers. 
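
NOTE: the rpc_cmd calls traced above (abort.sh@17 through @27; rpc_cmd forwards to scripts/rpc.py) build the target that the abort example then hammers. Condensed, with names and sizes exactly as in the trace; Delay0 wraps Malloc0 with large artificial latencies so plenty of I/O is still in flight when the aborts arrive:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4096-byte blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000             # delay every I/O class
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                           # the workload whose output follows
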
00:11:45.483 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 33306 00:11:45.483 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33370, failed to submit 62 00:11:45.483 success 33310, unsuccess 60, failed 0 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.483 rmmod nvme_tcp 00:11:45.483 rmmod nvme_fabrics 00:11:45.483 rmmod nvme_keyring 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 926610 ']' 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 926610 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 926610 ']' 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 926610 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 926610 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 926610' 00:11:45.483 killing process with pid 926610 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 926610 00:11:45.483 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 926610 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.741 12:06:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.271 12:06:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.271 00:11:48.271 real 0m7.080s 00:11:48.271 user 0m10.215s 00:11:48.271 sys 0m2.465s 00:11:48.271 12:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.271 12:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 ************************************ 00:11:48.271 END TEST nvmf_abort 00:11:48.271 ************************************ 00:11:48.271 12:06:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.271 12:06:55 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:48.271 12:06:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.271 12:06:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.271 12:06:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.271 ************************************ 00:11:48.271 START TEST nvmf_ns_hotplug_stress 00:11:48.271 ************************************ 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:48.271 * Looking for test storage... 00:11:48.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.271 12:06:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.271 12:06:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.271 12:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:50.169 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.170 12:06:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:50.170 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.170 12:06:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:11:50.170 00:11:50.170 --- 10.0.0.2 ping statistics --- 00:11:50.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.170 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:11:50.170 00:11:50.170 --- 10.0.0.1 ping statistics --- 00:11:50.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.170 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=928835 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 928835 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 928835 ']' 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.170 12:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.170 [2024-07-22 12:06:58.036512] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:11:50.170 [2024-07-22 12:06:58.036597] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.170 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.171 [2024-07-22 12:06:58.075329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
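
NOTE: the recurring "EAL: No free 2048 kB hugepages reported on node 1" notice appears before every app start in this run, yet both targets initialize, so here it reads as informational (presumably hugepages are available on another node or were set up earlier in the job; an assumption, not something this log states). Per-node availability can be checked with standard sysfs/procfs paths:

  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep Huge /proc/meminfo
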
00:11:50.429 [2024-07-22 12:06:58.105798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.429 [2024-07-22 12:06:58.199057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.429 [2024-07-22 12:06:58.199123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.429 [2024-07-22 12:06:58.199139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.429 [2024-07-22 12:06:58.199153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.429 [2024-07-22 12:06:58.199165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.429 [2024-07-22 12:06:58.199258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.429 [2024-07-22 12:06:58.199313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.429 [2024-07-22 12:06:58.199316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:50.429 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:50.687 [2024-07-22 12:06:58.607877] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.945 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:51.203 12:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.460 [2024-07-22 12:06:59.143710] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.460 12:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.717 12:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:51.975 Malloc0 00:11:51.975 12:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:52.232 Delay0 00:11:52.232 12:06:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.488 12:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:52.745 NULL1 00:11:52.745 12:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:53.002 12:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=929245 00:11:53.002 12:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:53.002 12:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:53.002 12:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.002 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.259 12:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.516 12:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:53.516 12:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:53.774 true 00:11:53.774 12:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:53.774 12:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.031 12:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.291 12:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:54.291 12:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:54.291 true 00:11:54.564 12:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:54.564 12:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.128 Read completed with error (sct=0, sc=11) 00:11:55.128 12:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.385 12:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:55.385 12:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:55.642 true 00:11:55.642 12:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:55.642 12:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.899 12:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.155 12:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:56.155 12:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:56.411 true 00:11:56.411 12:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:56.411 12:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.352 12:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.608 12:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:57.608 12:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:57.865 true 00:11:57.865 12:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:57.865 12:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.122 12:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.380 12:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:58.380 12:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:58.638 true 00:11:58.638 12:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:58.638 12:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.896 12:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.154 12:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:59.154 12:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1007 00:11:59.412 true 00:11:59.412 12:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:11:59.412 12:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.347 12:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.606 12:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:00.606 12:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:00.864 true 00:12:00.864 12:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:00.864 12:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.122 12:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.381 12:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:01.381 12:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:01.640 true 00:12:01.640 12:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:01.640 12:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.575 12:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.833 12:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:02.833 12:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:03.090 true 00:12:03.090 12:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:03.090 12:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.348 12:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.606 12:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:03.606 12:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:03.865 true 00:12:03.865 12:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:03.866 12:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.799 12:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.055 12:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:05.055 12:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:05.311 true 00:12:05.311 12:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:05.311 12:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.566 12:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.822 12:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:05.822 12:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:06.078 true 00:12:06.078 12:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:06.078 12:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.650 12:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.906 12:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:06.906 12:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:07.162 true 00:12:07.162 12:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:07.162 12:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.417 12:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.677 12:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 
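Each pass of the stress loop above and below prints the same five script lines: @44 checks that the perf reader (PERF_PID=929245, started at @40/@42 with -t 30 -q 128 -w randread -o 512 -Q 1000) is still alive, @45 hot-removes namespace 1, @46 re-attaches Delay0 as namespace 1, and @49/@50 grow the NULL1 bdev backing namespace 2 by one block per pass (null_size 1000, 1001, 1002, ...). The intermittent "Read completed with error (sct=0, sc=11)" lines are perf reads racing those hot-removals. Reconstructed from the @NN markers alone, the loop looks roughly like this sketch (an inference from the trace, not the script source; the rpc and perf paths are copied from the commands above):

    # Sketch of the hot-plug loop in target/ns_hotplug_stress.sh, inferred from
    # the @25/@40-@50 xtrace markers; the real script may differ in detail.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    null_size=1000                                                      # @25
    "$perf" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 30 -q 128 -w randread -o 512 -Q 1000 &                   # @40: 30 s reader in background
    PERF_PID=$!                                                         # @42: 929245 in this run
    while kill -0 "$PERF_PID"; do                                       # @44: loop until perf exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back
        null_size=$((null_size + 1))                                    # @49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # @50: resize NSID 2's bdev
    done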
00:12:07.677 12:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:07.966 true 00:12:07.966 12:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:07.966 12:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.223 12:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.479 12:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:08.479 12:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:08.735 true 00:12:08.735 12:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:08.735 12:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.664 12:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.921 12:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:09.921 12:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:10.179 true 00:12:10.179 12:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:10.179 12:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.436 12:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.693 12:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:10.693 12:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:10.950 true 00:12:10.950 12:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:10.950 12:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.882 12:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.139 12:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:12.139 12:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:12.396 true 00:12:12.396 12:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:12.396 12:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.653 12:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.911 12:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:12.911 12:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:13.168 true 00:12:13.168 12:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:13.168 12:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.425 12:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.682 12:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:13.682 12:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:13.939 true 00:12:13.939 12:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:13.939 12:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.309 12:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.309 12:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:15.309 12:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:15.566 true 00:12:15.566 12:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:15.566 12:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.823 12:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.081 12:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:16.081 12:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:16.338 true 00:12:16.338 12:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:16.338 12:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.268 12:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.524 12:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:17.524 12:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:17.781 true 00:12:17.781 12:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:17.781 12:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.037 12:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.357 12:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:18.357 12:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:18.357 true 00:12:18.357 12:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:18.357 12:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.290 12:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.548 12:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:19.548 12:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:19.805 true 00:12:19.805 12:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245 00:12:19.805 12:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.063 12:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.321 12:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:12:20.321 12:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:12:20.579 true
00:12:20.579 12:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245
00:12:20.579 12:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:21.512 12:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:21.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:21.512 12:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:12:21.512 12:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:12:21.767 true
00:12:21.767 12:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245
00:12:21.767 12:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:22.023 12:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:22.278 12:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:12:22.279 12:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:12:22.535 true
00:12:22.535 12:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245
00:12:22.535 12:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:23.474 Initializing NVMe Controllers
00:12:23.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:23.474 Controller IO queue size 128, less than required.
00:12:23.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:23.474 Controller IO queue size 128, less than required.
00:12:23.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:23.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:23.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:23.474 Initialization complete. Launching workers.
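All of the perf output here carries the same capture timestamp (00:12:23.474), i.e. the reader's buffered stdout is emitted in one block as the process exits: the controller and namespace initialization messages above, then the 30-second run summary below. Two quick arithmetic checks on that table (shell one-liners, not part of the log): throughput in MiB/s is IOPS x 512-byte I/O size / 2^20, and the Total average latency is the IOPS-weighted mean of the two namespace rows:

    awk 'BEGIN { printf "%.2f\n", 9634.41 * 512 / 2^20 }'
    # 4.70 -> matches the NSID 2 MiB/s column
    awk 'BEGIN { printf "%.2f\n", (536.29*113537.27 + 9634.41*13246.73) / (536.29 + 9634.41) }'
    # ~18534.94 -> matches the Total average latency (18534.99) up to display rounding

NSID 1 (Delay0, the namespace being hot-removed and re-added every pass, on a delay bdev configured with 1000000-us latencies) averages about 113.5 ms per I/O versus about 13.2 ms for NSID 2 (NULL1), consistent with the artificial delays and the constant detach/attach cycling on that namespace.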
00:12:23.474 ========================================================
00:12:23.474                                                                                                  Latency(us)
00:12:23.474 Device Information                                                       : IOPS       MiB/s      Average      min          max
00:12:23.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  536.29     0.26       113537.27    2498.86      1011668.58
00:12:23.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  9634.41    4.70       13246.73     4001.56      372688.72
00:12:23.474 ========================================================
00:12:23.474 Total                                                                    :  10170.70   4.97       18534.99     2498.86      1011668.58
00:12:23.474
00:12:23.474 12:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:23.731 12:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:12:23.731 12:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:12:23.998 true
00:12:23.998 12:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929245
00:12:23.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (929245) - No such process
00:12:23.998 12:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 929245
00:12:23.998 12:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:24.254 12:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:24.510 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:24.510 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:24.510 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:24.510 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:24.510 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:24.766 null0
00:12:24.766 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:24.766 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:24.766 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:25.024 null1
00:12:25.024 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:25.024 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:25.024 12:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:12:25.280 null2
00:12:25.280 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:25.280 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads
)) 00:12:25.280 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:25.537 null3 00:12:25.537 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.537 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.537 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:25.794 null4 00:12:25.794 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.794 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.794 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:26.051 null5 00:12:26.051 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.051 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.051 12:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:26.309 null6 00:12:26.309 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.309 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.309 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:26.585 null7 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
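With the single-namespace phase finished (@53 wait, then @54/@55 removing both namespaces), the script fans out at @58: eight null bdevs (null0 through null7, 100 MiB each with a 4096-byte block size) are created by the @59/@60 loop, and the @62-@64 lines interleaving here launch one backgrounded add_remove worker per bdev, collecting each PID into the pids array that @66 later waits on (933298 933299 933300 933302 933305 933307 933309 933311 in this run). A sketch of that launcher, again inferred from the markers rather than the script source ($rpc as in the earlier sketch):

    nthreads=8; pids=()                               # @58
    for ((i = 0; i < nthreads; i++)); do              # @59
        "$rpc" bdev_null_create "null$i" 100 4096     # @60: 100 MiB, 4096 B blocks
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove "$((i + 1))" "null$i" &            # @63: NSID i+1 paired with null<i>
        pids+=($!)                                    # @64
    done
    wait "${pids[@]}"                                 # @66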
00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
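Each worker runs the add_remove helper whose body shows up as the interleaved @14/@16/@17/@18 lines: ten times over, it attaches its null bdev at its fixed namespace ID and immediately detaches it. Because every worker owns a distinct NSID/bdev pair (add_remove 1 null0 through add_remove 8 null7), the concurrent adds and removes never collide on a namespace. Reconstructed from the markers (a sketch; the enclosing function syntax is assumed):

    add_remove() {
        local nsid=$1 bdev=$2                                                       # @14
        for ((i = 0; i < 10; i++)); do                                              # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }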
00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 933298 933299 933300 933302 933305 933307 933309 933311 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.585 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:26.842 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.098 12:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.354 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.354 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.354 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.354 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.355 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.355 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.355 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.355 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.612 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.869 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.126 12:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.384 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.641 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.641 
12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.899 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.156 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.156 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.156 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.156 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.413 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.670 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.926 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.926 
12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.926 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.926 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.926 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.926 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.927 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.927 12:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.184 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:30.441 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.441 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:30.441 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:30.441 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.441 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:30.441 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.697 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:30.697 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:30.697 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.697 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.697 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.954 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:31.212 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.212 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:31.212 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:31.212 
12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:31.212 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.212 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:31.212 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:31.212 12:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.469 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.726 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.984 rmmod nvme_tcp 00:12:31.984 rmmod nvme_fabrics 00:12:31.984 rmmod nvme_keyring 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 928835 ']' 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 928835 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 928835 ']' 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 928835 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 928835 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 928835' 00:12:31.984 killing process with pid 928835 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 928835 00:12:31.984 12:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 928835 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.242 12:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.773 12:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:34.773 00:12:34.773 real 0m46.357s 00:12:34.773 user 3m31.438s 00:12:34.773 sys 0m16.127s 00:12:34.773 12:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.773 12:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.773 ************************************ 00:12:34.773 END TEST nvmf_ns_hotplug_stress 00:12:34.773 ************************************ 00:12:34.773 12:07:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:34.773 12:07:42 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:34.773 12:07:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:34.773 12:07:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.773 12:07:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.773 ************************************ 00:12:34.773 START TEST nvmf_connect_stress 00:12:34.773 ************************************ 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:34.773 * Looking for test storage... 
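For orientation: the loop that dominates the trace above is the core of the hotplug stress test that just finished. target/ns_hotplug_stress.sh@16 advances a pass counter while (( i < 10 )), @17 attaches the eight null bdevs null0-null7 to nqn.2016-06.io.spdk:cnode1 as namespace IDs 1-8 (in an order that varies from pass to pass, which is why the xtrace lines interleave), and @18 detaches them all again. A minimal sequential sketch of that cycle, assuming $rpc points at spdk/scripts/rpc.py and the null bdevs already exist:

    NQN=nqn.2016-06.io.spdk:cnode1
    rpc=./scripts/rpc.py
    for ((i = 0; i < 10; i++)); do
        for n in {1..8}; do
            # Attach null bdev "null$((n-1))" as namespace ID $n.
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        done
        for n in {1..8}; do
            # Detach it again; connected hosts observe the namespace change.
            "$rpc" nvmf_subsystem_remove_ns "$NQN" "$n"
        done
    done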
00:12:34.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.773 12:07:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:34.774 12:07:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.674 12:07:44 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:36.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:12:36.674 00:12:36.674 --- 10.0.0.2 ping statistics --- 00:12:36.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.674 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:12:36.674 00:12:36.674 --- 10.0.0.1 ping statistics --- 00:12:36.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.674 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.674 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=936050 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 936050 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 936050 ']' 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.675 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.675 [2024-07-22 12:07:44.396518] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
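The plumbing behind those two successful pings is nvmf_tcp_init (nvmf/common.sh@229-@268 above): one of the two detected e810 ports (cvl_0_0, under 0000:0a:00.0) is moved into a dedicated network namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator, port 4420 is opened, and a one-packet ping in each direction verifies the 10.0.0.0/24 path before the target starts. Condensed from the commands in the trace, with interface and namespace names exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

nvmfappstart then launches the target inside that namespace, as logged at common.sh@480: ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE.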
00:12:36.675 [2024-07-22 12:07:44.396625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.675 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.675 [2024-07-22 12:07:44.437391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:36.675 [2024-07-22 12:07:44.464405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.675 [2024-07-22 12:07:44.548878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.675 [2024-07-22 12:07:44.548932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.675 [2024-07-22 12:07:44.548961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.675 [2024-07-22 12:07:44.548972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.675 [2024-07-22 12:07:44.548982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.675 [2024-07-22 12:07:44.549032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.675 [2024-07-22 12:07:44.549093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.675 [2024-07-22 12:07:44.549089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.933 [2024-07-22 12:07:44.693377] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.933 [2024-07-22 12:07:44.723049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.933 NULL1 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=936081 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.933 12:07:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.254 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.254 12:07:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:37.254 12:07:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.254 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.254 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.540 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.540 12:07:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:37.540 12:07:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.540 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.540 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.104 
12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.104 12:07:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:38.104 12:07:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.104 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.104 12:07:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.360 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.360 12:07:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:38.360 12:07:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.360 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.360 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.618 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.618 12:07:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:38.618 12:07:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.618 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.618 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.875 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.875 12:07:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:38.875 12:07:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.875 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.875 12:07:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.133 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.133 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:39.133 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.133 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.133 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.698 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.698 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:39.698 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.698 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.698 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.956 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.956 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:39.956 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.956 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.956 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.214 12:07:47 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.214 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:40.214 12:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.214 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.214 12:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.472 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.472 12:07:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:40.472 12:07:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.472 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.472 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.730 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.730 12:07:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:40.730 12:07:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.730 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.730 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.295 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.295 12:07:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:41.295 12:07:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.295 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.295 12:07:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.552 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.552 12:07:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:41.552 12:07:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.552 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.552 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.809 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.809 12:07:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:41.809 12:07:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.809 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.809 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.066 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.066 12:07:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:42.066 12:07:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.066 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.066 12:07:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.324 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.324 
12:07:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:42.324 12:07:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.324 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.324 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.889 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.889 12:07:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:42.889 12:07:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.889 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.889 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.147 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.147 12:07:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:43.147 12:07:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.147 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.147 12:07:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.405 12:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:43.405 12:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.405 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.405 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.662 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.662 12:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:43.663 12:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.663 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.663 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.228 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.228 12:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:44.228 12:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.228 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.228 12:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.485 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.485 12:07:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:44.485 12:07:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.485 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.485 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.743 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.743 12:07:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 936081 00:12:44.743 12:07:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.743 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.743 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.001 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.001 12:07:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:45.001 12:07:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.001 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.001 12:07:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.258 12:07:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:45.258 12:07:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.258 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.258 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.822 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.822 12:07:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:45.822 12:07:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.822 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.822 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.079 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.079 12:07:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:46.079 12:07:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.079 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.079 12:07:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.336 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.336 12:07:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:46.336 12:07:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.336 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.336 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.593 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.593 12:07:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:46.593 12:07:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.593 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.593 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.850 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.850 12:07:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:46.850 12:07:54 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.850 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.850 12:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 936081 00:12:47.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (936081) - No such process 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 936081 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.366 rmmod nvme_tcp 00:12:47.366 rmmod nvme_fabrics 00:12:47.366 rmmod nvme_keyring 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 936050 ']' 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 936050 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 936050 ']' 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 936050 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936050 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936050' 00:12:47.366 killing process with pid 936050 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 936050 00:12:47.366 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 936050 00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
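The page of repeated "kill -0 936081" / rpc_cmd pairs above is the heart of the stress test: the harness keeps issuing RPCs at the target for as long as the connect_stress client (PID 936081, launched with -t 10) is alive, and the "line 34: kill: (936081) - No such process" message is the normal exit condition once the client's 10-second run ends. Condensed, the idiom looks like this (the RPC payload accumulated in rpc.txt is elided here):

    while kill -0 "$PERF_PID" 2>/dev/null; do    # signal 0 probes existence only
        rpc_cmd < "$rpcs"                        # keep the target's RPC path busy
    done
    wait "$PERF_PID"                             # reap the client, propagate its status

Probing with signal 0 never delivers a signal; it only asks the kernel whether the PID still exists, which is why the loop degrades cleanly into wait once the client finishes.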
00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.625 12:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.526 12:07:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.526 00:12:49.526 real 0m15.258s 00:12:49.526 user 0m38.223s 00:12:49.526 sys 0m5.994s 00:12:49.526 12:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.526 12:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.526 ************************************ 00:12:49.526 END TEST nvmf_connect_stress 00:12:49.526 ************************************ 00:12:49.526 12:07:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:49.526 12:07:57 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:49.526 12:07:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.526 12:07:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.526 12:07:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.526 ************************************ 00:12:49.526 START TEST nvmf_fused_ordering 00:12:49.526 ************************************ 00:12:49.526 12:07:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:49.785 * Looking for test storage... 
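Each sub-test in this log is driven through the harness's run_test helper, which is what produces the starred START/END banners and the real/user/sys timing block above. A rough sketch of its shape (the actual helper in autotest_common.sh also handles xtrace and return-code bookkeeping, omitted here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # e.g. .../fused_ordering.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

Nesting these calls is what gives the log tags their dotted form: nvmf_tcp wraps nvmf_tcp.nvmf_fused_ordering, and so on.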
00:12:49.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.785 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.786 12:07:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.718 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.718 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.718 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.718 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.718 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.718 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.718 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.719 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.719 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.719 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.719 12:07:59 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:12:51.719 00:12:51.719 --- 10.0.0.2 ping statistics --- 00:12:51.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.719 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:12:51.719 00:12:51.719 --- 10.0.0.1 ping statistics --- 00:12:51.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.719 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=939223 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 939223 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 939223 ']' 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.719 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.977 [2024-07-22 12:07:59.663469] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
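The namespace plumbing for this second test repeats the recipe traced at nvmf/common.sh@229-268 above: the first E810 port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Since both pings succeed, the two physical ports are evidently connected to each other, giving the TCP transport a real NIC data path without needing a second host.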
00:12:51.977 [2024-07-22 12:07:59.663540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.977 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.977 [2024-07-22 12:07:59.702102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:51.977 [2024-07-22 12:07:59.730225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.977 [2024-07-22 12:07:59.817252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.977 [2024-07-22 12:07:59.817308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.977 [2024-07-22 12:07:59.817335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.977 [2024-07-22 12:07:59.817346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.977 [2024-07-22 12:07:59.817356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.977 [2024-07-22 12:07:59.817383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.236 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.236 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:52.236 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.236 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.236 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:52.236 12:07:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.236 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 [2024-07-22 12:07:59.947738] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 [2024-07-22 12:07:59.963920] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 NULL1 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.237 12:07:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:52.237 [2024-07-22 12:08:00.009321] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:12:52.237 [2024-07-22 12:08:00.009366] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939365 ] 00:12:52.237 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.237 [2024-07-22 12:08:00.044764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
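The target build-out for fused_ordering is the same RPC sequence used by connect_stress: create the TCP transport, create a subsystem, open a listener, and give the subsystem a 1000 MiB null bdev as namespace 1 (hence the "Namespace ID: 1 size: 1GB" line below). The rpc_cmd calls in the trace correspond to plain rpc.py invocations against the target's default socket:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512       # 1000 MiB backing, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering(N) lines that follow appear to be the client's per-iteration progress markers as it submits fused (first/second) command pairs against that namespace and checks they are handled in submission order.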
00:12:52.803 Attached to nqn.2016-06.io.spdk:cnode1
00:12:52.803 Namespace ID: 1 size: 1GB
00:12:52.803 fused_ordering(0)
[fused_ordering(1) through fused_ordering(966) elided: the counter advanced without a gap while completion timestamps moved from 00:12:52.803 to 00:12:55.128 as batches drained]
00:12:55.128 fused_ordering(967)
00:12:55.128 fused_ordering(968) 00:12:55.128 fused_ordering(969) 00:12:55.128 fused_ordering(970) 00:12:55.128 fused_ordering(971) 00:12:55.128 fused_ordering(972) 00:12:55.128 fused_ordering(973) 00:12:55.128 fused_ordering(974) 00:12:55.128 fused_ordering(975) 00:12:55.128 fused_ordering(976) 00:12:55.128 fused_ordering(977) 00:12:55.128 fused_ordering(978) 00:12:55.128 fused_ordering(979) 00:12:55.128 fused_ordering(980) 00:12:55.128 fused_ordering(981) 00:12:55.128 fused_ordering(982) 00:12:55.128 fused_ordering(983) 00:12:55.128 fused_ordering(984) 00:12:55.128 fused_ordering(985) 00:12:55.128 fused_ordering(986) 00:12:55.128 fused_ordering(987) 00:12:55.128 fused_ordering(988) 00:12:55.128 fused_ordering(989) 00:12:55.128 fused_ordering(990) 00:12:55.128 fused_ordering(991) 00:12:55.128 fused_ordering(992) 00:12:55.128 fused_ordering(993) 00:12:55.128 fused_ordering(994) 00:12:55.128 fused_ordering(995) 00:12:55.128 fused_ordering(996) 00:12:55.128 fused_ordering(997) 00:12:55.128 fused_ordering(998) 00:12:55.128 fused_ordering(999) 00:12:55.128 fused_ordering(1000) 00:12:55.128 fused_ordering(1001) 00:12:55.128 fused_ordering(1002) 00:12:55.128 fused_ordering(1003) 00:12:55.128 fused_ordering(1004) 00:12:55.128 fused_ordering(1005) 00:12:55.128 fused_ordering(1006) 00:12:55.128 fused_ordering(1007) 00:12:55.128 fused_ordering(1008) 00:12:55.128 fused_ordering(1009) 00:12:55.128 fused_ordering(1010) 00:12:55.128 fused_ordering(1011) 00:12:55.128 fused_ordering(1012) 00:12:55.128 fused_ordering(1013) 00:12:55.128 fused_ordering(1014) 00:12:55.128 fused_ordering(1015) 00:12:55.128 fused_ordering(1016) 00:12:55.128 fused_ordering(1017) 00:12:55.128 fused_ordering(1018) 00:12:55.128 fused_ordering(1019) 00:12:55.128 fused_ordering(1020) 00:12:55.128 fused_ordering(1021) 00:12:55.128 fused_ordering(1022) 00:12:55.128 fused_ordering(1023) 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.128 rmmod nvme_tcp 00:12:55.128 rmmod nvme_fabrics 00:12:55.128 rmmod nvme_keyring 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 939223 ']' 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 939223 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 939223 ']' 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 939223 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:55.128 12:08:02 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939223 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 939223' 00:12:55.128 killing process with pid 939223 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 939223 00:12:55.128 12:08:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 939223 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.128 12:08:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.655 12:08:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:57.655 00:12:57.655 real 0m7.622s 00:12:57.655 user 0m5.175s 00:12:57.655 sys 0m3.448s 00:12:57.655 12:08:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.655 12:08:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:57.656 ************************************ 00:12:57.656 END TEST nvmf_fused_ordering 00:12:57.656 ************************************ 00:12:57.656 12:08:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:57.656 12:08:05 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:57.656 12:08:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:57.656 12:08:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.656 12:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.656 ************************************ 00:12:57.656 START TEST nvmf_delete_subsystem 00:12:57.656 ************************************ 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:57.656 * Looking for test storage... 
00:12:57.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.656 12:08:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:59.555 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:59.555 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:59.555 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:59.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:12:59.555 00:12:59.555 --- 10.0.0.2 ping statistics --- 00:12:59.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.555 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:12:59.555 00:12:59.555 --- 10.0.0.1 ping statistics --- 00:12:59.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.555 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.555 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=941568 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 941568 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 941568 ']' 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
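The plumbing nvmftestinit just finished is the entire test topology: the two ports of the E810 adapter discovered above (cvl_0_0 and cvl_0_1) are split across network namespaces, so target and initiator traffic really traverses the NIC rather than loopback. Condensed from the trace into a runnable sequence; the interface names are the ones enumerated above and will differ per machine:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side port
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back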
00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.556 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 [2024-07-22 12:08:07.346047] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:12:59.556 [2024-07-22 12:08:07.346148] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.556 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.556 [2024-07-22 12:08:07.384002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:59.556 [2024-07-22 12:08:07.414253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:59.814 [2024-07-22 12:08:07.504375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.814 [2024-07-22 12:08:07.504430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.814 [2024-07-22 12:08:07.504446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.814 [2024-07-22 12:08:07.504460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.814 [2024-07-22 12:08:07.504471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.814 [2024-07-22 12:08:07.504552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.814 [2024-07-22 12:08:07.504558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.814 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.814 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 [2024-07-22 12:08:07.650487] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 [2024-07-22 12:08:07.666728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 NULL1 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 Delay0 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=941661 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:59.815 12:08:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:59.815 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.815 [2024-07-22 12:08:07.741569] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
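For anyone replaying this by hand: the xtrace above reduces to a short RPC sequence against the just-started target. A minimal sketch, assuming a running nvmf_tgt on the default RPC socket and rpc.py on PATH (the trace itself uses the full workspace path to scripts/rpc.py):

rpc.py nvmf_create_transport -t tcp -o -u 8192                 # transport options as traced; -u is in-capsule data size
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -m: at most 10 namespaces
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512                         # 1000 MiB backing bdev, 512-byte blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # latencies in usec, so ~1 s per I/O
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second delay bdev is the point of the test: with spdk_nvme_perf pushing 512-byte randrw at queue depth 128 for five seconds (-q 128 -w randrw -M 70 -o 512 -t 5), the nvmf_delete_subsystem call below is guaranteed to land while I/O is still outstanding.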
00:13:02.340 12:08:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.340 12:08:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.340 12:08:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:02.340 [several hundred interleaved 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines elided; generic NVMe status 0x8 is Command Aborted due to SQ Deletion, the expected completion for I/O still in flight when the subsystem is torn down] 00:13:02.341 [2024-07-22 12:08:09.840568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f47f000d310 is same with the state(5) to be set 00:13:02.904 [2024-07-22 12:08:10.798104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bb40 is same with the state(5) to be set 00:13:03.162 [2024-07-22 12:08:10.839977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e100 is same with the state(5) to be set 00:13:03.162 [2024-07-22 12:08:10.840264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104dd40 is same with the state(5) to be set 00:13:03.162 [2024-07-22 12:08:10.842542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f47f000d630 is same with the state(5) to be set
00:13:03.162 Write completed with error (sct=0, sc=8) 00:13:03.162 Write completed with error (sct=0, sc=8) 00:13:03.162 Write completed with error (sct=0, sc=8) 00:13:03.162 Read completed with error (sct=0, sc=8) 00:13:03.162 Read completed with error (sct=0, sc=8) 00:13:03.162 [2024-07-22 12:08:10.843306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f47f000cff0 is same with the state(5) to be set 00:13:03.162 Initializing NVMe Controllers 00:13:03.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.162 Controller IO queue size 128, less than required. 00:13:03.162 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:03.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:03.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:03.162 Initialization complete. Launching workers. 00:13:03.162 ======================================================== 00:13:03.162 Latency(us) 00:13:03.162 Device Information : IOPS MiB/s Average min max 00:13:03.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.63 0.09 905184.97 719.13 1011917.17 00:13:03.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.69 0.09 881911.61 758.14 1011494.34 00:13:03.162 ======================================================== 00:13:03.162 Total : 361.32 0.18 893804.04 719.13 1011917.17 00:13:03.162 00:13:03.162 [2024-07-22 12:08:10.843841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bb40 (9): Bad file descriptor 00:13:03.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:03.162 12:08:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.162 12:08:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:03.162 12:08:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 941661 00:13:03.162 12:08:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 941661 00:13:03.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (941661) - No such process 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 941661 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 941661 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 941661 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@651 -- # es=1 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:03.725 [2024-07-22 12:08:11.367788] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=942116 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:03.725 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:03.725 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.725 [2024-07-22 12:08:11.435156] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
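The delay=0 / kill -0 / sleep 0.5 trio above is the script's liveness poll: rather than killing perf, it waits for perf to exit on its own once the subsystem is deleted underneath it. A minimal sketch of that pattern, with the pid and loop bound taken from the trace (the real script's control flow may differ in detail):

perf_pid=942116   # pid captured when spdk_nvme_perf was launched
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes existence, sends no signal
    (( delay++ > 20 )) && exit 1            # give up after ~10 s of half-second naps
    sleep 0.5
done

Once the loop falls through, kill -0 reports 'No such process' (visible below at delete_subsystem.sh line 57) and the follow-up wait collects the exit status.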
00:13:03.982 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:03.982 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:03.982 12:08:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:04.544 12:08:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:04.544 12:08:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:04.544 12:08:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:05.106 12:08:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:05.106 12:08:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:05.106 12:08:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:05.669 12:08:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:05.669 12:08:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:05.669 12:08:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:06.231 12:08:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:06.231 12:08:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:06.231 12:08:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:06.487 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:06.487 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:06.487 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:06.744 Initializing NVMe Controllers 00:13:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:06.744 Controller IO queue size 128, less than required. 00:13:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:06.744 Initialization complete. Launching workers. 
00:13:06.744 ======================================================== 00:13:06.744 Latency(us) 00:13:06.744 Device Information : IOPS MiB/s Average min max 00:13:06.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003148.23 1000197.67 1010440.70 00:13:06.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004970.05 1000459.55 1011564.48 00:13:06.744 ======================================================== 00:13:06.744 Total : 256.00 0.12 1004059.14 1000197.67 1011564.48 00:13:06.744 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 942116 00:13:07.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (942116) - No such process 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 942116 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.002 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.002 rmmod nvme_tcp 00:13:07.002 rmmod nvme_fabrics 00:13:07.259 rmmod nvme_keyring 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 941568 ']' 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 941568 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 941568 ']' 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 941568 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941568 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941568' 00:13:07.259 killing process with pid 941568 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 941568 00:13:07.259 12:08:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 941568 
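The process-side half of nvmftestfini, traced above, is compact enough to sketch (pid and module names from the trace; guards and retries elided, and the namespace/interface cleanup continues below):

sync
modprobe -v -r nvme-tcp       # the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
kill 941568                   # the nvmf_tgt reactor process started for this test
wait 941568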
00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.518 12:08:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.429 12:08:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.429 00:13:09.429 real 0m12.157s 00:13:09.429 user 0m27.674s 00:13:09.429 sys 0m2.814s 00:13:09.429 12:08:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.429 12:08:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.429 ************************************ 00:13:09.429 END TEST nvmf_delete_subsystem 00:13:09.429 ************************************ 00:13:09.429 12:08:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:09.429 12:08:17 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:09.429 12:08:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:09.429 12:08:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.429 12:08:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:09.429 ************************************ 00:13:09.429 START TEST nvmf_ns_masking 00:13:09.429 ************************************ 00:13:09.429 12:08:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:09.738 * Looking for test storage... 
00:13:09.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7b98b1a5-49e2-42c6-8b74-410044dbad40 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f6d252eb-251a-4307-8d87-b132bcad55df 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:09.738 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4614d6f6-e7c6-4d3a-a6b2-8e8aa12d16b8 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.739 12:08:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
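Note that every identifier ns_masking uses is minted fresh per run; a sketch of the generation traced above:

ns1uuid=$(uuidgen)                # 7b98b1a5-49e2-42c6-8b74-410044dbad40 in this run
ns2uuid=$(uuidgen)                # f6d252eb-251a-4307-8d87-b132bcad55df
HOSTID=$(uuidgen)                 # 4614d6f6-e7c6-4d3a-a6b2-8e8aa12d16b8; handed to nvme connect as -I below
NVME_HOSTNQN=$(nvme gen-hostnqn)  # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}   # assumption: common.sh strips the nqn prefix to get the bare uuid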
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:11.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:11.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.639 
12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:11.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:11.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:11.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:13:11.639 00:13:11.639 --- 10.0.0.2 ping statistics --- 00:13:11.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.639 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:13:11.639 00:13:11.639 --- 10.0.0.1 ping statistics --- 00:13:11.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.639 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.639 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:11.640 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:11.897 12:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:11.897 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=944455 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 944455 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 944455 ']' 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.898 12:08:19 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.898 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:11.898 [2024-07-22 12:08:19.637309] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:13:11.898 [2024-07-22 12:08:19.637390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.898 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.898 [2024-07-22 12:08:19.676402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:11.898 [2024-07-22 12:08:19.702524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.898 [2024-07-22 12:08:19.790444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.898 [2024-07-22 12:08:19.790507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.898 [2024-07-22 12:08:19.790519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.898 [2024-07-22 12:08:19.790530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.898 [2024-07-22 12:08:19.790539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
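Condensed from the trace above, the nvmf_tcp_init bring-up that lets a single dual-port NIC act as both target and initiator is, as a minimal sketch using this run's values (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 subnet are specific to this machine, and the nvmf_tgt path is shortened):

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 on cvl_0_1; the target answers on 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let the NVMe/TCP listener port through, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # run the target inside the namespace so it binds the namespaced port
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF

Every command here appears verbatim in the trace; only the workspace prefix on the nvmf_tgt binary has been trimmed.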
00:13:11.898 [2024-07-22 12:08:19.790575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.155 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.155 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:12.155 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.155 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:12.155 12:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.155 12:08:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.155 12:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:12.411 [2024-07-22 12:08:20.165827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.411 12:08:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:12.411 12:08:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:12.411 12:08:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:12.668 Malloc1 00:13:12.668 12:08:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:12.925 Malloc2 00:13:12.925 12:08:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:13.182 12:08:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:13.439 12:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.696 [2024-07-22 12:08:21.441652] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.696 12:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:13.696 12:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4614d6f6-e7c6-4d3a-a6b2-8e8aa12d16b8 -a 10.0.0.2 -s 4420 -i 4 00:13:13.696 12:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.696 12:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:13.696 12:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.696 12:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:13.696 12:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.220 [ 0]:0x1 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce2ac399dadb4d14a8cd0c1dde6cc450 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce2ac399dadb4d14a8cd0c1dde6cc450 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.220 [ 0]:0x1 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce2ac399dadb4d14a8cd0c1dde6cc450 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce2ac399dadb4d14a8cd0c1dde6cc450 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:16.220 [ 1]:0x2 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.220 12:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.220 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=778ed9de7f7d416c9747ec76bc3bca66 00:13:16.220 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 778ed9de7f7d416c9747ec76bc3bca66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.220 12:08:24 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:16.220 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.477 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.735 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4614d6f6-e7c6-4d3a-a6b2-8e8aa12d16b8 -a 10.0.0.2 -s 4420 -i 4 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:16.993 12:08:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.520 12:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:19.520 [ 0]:0x2 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=778ed9de7f7d416c9747ec76bc3bca66 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 778ed9de7f7d416c9747ec76bc3bca66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.520 [ 0]:0x1 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce2ac399dadb4d14a8cd0c1dde6cc450 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce2ac399dadb4d14a8cd0c1dde6cc450 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.520 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:19.520 [ 1]:0x2 00:13:19.778 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.778 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.778 12:08:27 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=778ed9de7f7d416c9747ec76bc3bca66 00:13:19.778 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 778ed9de7f7d416c9747ec76bc3bca66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.778 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:20.035 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:20.036 [ 0]:0x2 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=778ed9de7f7d416c9747ec76bc3bca66 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 778ed9de7f7d416c9747ec76bc3bca66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:20.036 12:08:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:20.294 12:08:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:20.294 12:08:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4614d6f6-e7c6-4d3a-a6b2-8e8aa12d16b8 -a 10.0.0.2 -s 4420 -i 4 00:13:20.551 12:08:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:20.551 12:08:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.551 12:08:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.551 12:08:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:20.551 12:08:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:20.551 12:08:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:22.443 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:22.700 [ 0]:0x1 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce2ac399dadb4d14a8cd0c1dde6cc450 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce2ac399dadb4d14a8cd0c1dde6cc450 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.700 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:22.956 [ 1]:0x2 00:13:22.956 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:22.956 
12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.956 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=778ed9de7f7d416c9747ec76bc3bca66 00:13:22.957 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 778ed9de7f7d416c9747ec76bc3bca66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.957 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:23.214 [ 0]:0x2 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.214 12:08:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=778ed9de7f7d416c9747ec76bc3bca66 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 778ed9de7f7d416c9747ec76bc3bca66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:23.214 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:23.484 [2024-07-22 12:08:31.243247] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:23.484 request: 00:13:23.484 { 00:13:23.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.484 "nsid": 2, 00:13:23.484 "host": "nqn.2016-06.io.spdk:host1", 00:13:23.484 "method": "nvmf_ns_remove_host", 00:13:23.484 "req_id": 1 00:13:23.484 } 00:13:23.484 Got JSON-RPC error response 00:13:23.484 response: 00:13:23.484 { 00:13:23.484 "code": -32602, 00:13:23.484 "message": "Invalid parameters" 00:13:23.484 } 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # 
ns_is_visible 0x1 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:23.484 [ 0]:0x2 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=778ed9de7f7d416c9747ec76bc3bca66 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 778ed9de7f7d416c9747ec76bc3bca66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=945947 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 945947 /var/tmp/host.sock 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 945947 ']' 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:23.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
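The "[ n]:0xNSID" lines and the nguid comparisons repeated above all come from the ns_is_visible helper in target/ns_masking.sh; paraphrased from the trace, it reduces to roughly:

    ns_is_visible() {
        local nsid=$1
        # print the namespace entry if it is in the controller's active list
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # the pass/fail signal is the NGUID: a masked namespace either drops
        # out of the list or identifies with the all-zero NGUID
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

Masking itself is pure RPC state: a namespace attached with --no-auto-visible stays hidden until nvmf_ns_add_host grants a specific host NQN access, and nvmf_ns_remove_host revokes it again, which is exactly the sequence of ns_is_visible / NOT ns_is_visible assertions exercised above.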
00:13:23.484 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.485 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:23.742 [2024-07-22 12:08:31.450449] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:13:23.742 [2024-07-22 12:08:31.450532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945947 ] 00:13:23.742 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.742 [2024-07-22 12:08:31.485910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:23.742 [2024-07-22 12:08:31.518498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.742 [2024-07-22 12:08:31.615377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.999 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.999 12:08:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:23.999 12:08:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.255 12:08:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:24.819 12:08:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7b98b1a5-49e2-42c6-8b74-410044dbad40 00:13:24.819 12:08:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:24.819 12:08:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7B98B1A549E242C68B74410044DBAD40 -i 00:13:25.076 12:08:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f6d252eb-251a-4307-8d87-b132bcad55df 00:13:25.076 12:08:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:25.076 12:08:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F6D252EB251A43078D87B132BCAD55DF -i 00:13:25.076 12:08:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:25.639 12:08:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:25.639 12:08:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:25.639 12:08:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:26.229 nvme0n1 
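Having re-added Malloc1 and Malloc2 with fixed NGUIDs (-g) in the steps above, the test now checks masking from the bdev layer rather than through nvme-cli. A condensed sketch of the RPC sequence, with each host NQN granted exactly one namespace and one controller attached per host through the second spdk_tgt (rpc.py paths shortened; NQNs and addresses are this run's values):

    # map namespace 1 to host1 and namespace 2 to host2
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
    # attach one controller per host identity via the host-side app on /var/tmp/host.sock
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

If masking holds, host1's controller surfaces only nvme0n1 and host2's only nvme1n2, and the bdev_get_bdevs checks that follow confirm this by comparing each bdev's uuid against the NGUID assigned to its namespace.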
00:13:26.229 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:26.229 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:26.494 nvme1n2 00:13:26.494 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:26.494 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:26.494 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:26.494 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:26.494 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:26.750 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:26.750 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:26.750 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:26.750 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:27.008 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7b98b1a5-49e2-42c6-8b74-410044dbad40 == \7\b\9\8\b\1\a\5\-\4\9\e\2\-\4\2\c\6\-\8\b\7\4\-\4\1\0\0\4\4\d\b\a\d\4\0 ]] 00:13:27.008 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:27.008 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:27.008 12:08:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f6d252eb-251a-4307-8d87-b132bcad55df == \f\6\d\2\5\2\e\b\-\2\5\1\a\-\4\3\0\7\-\8\d\8\7\-\b\1\3\2\b\c\a\d\5\5\d\f ]] 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 945947 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 945947 ']' 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 945947 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 945947 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 945947' 00:13:27.266 killing process with pid 945947 00:13:27.266 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 945947 00:13:27.266 12:08:35 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 945947 00:13:27.830 12:08:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.087 rmmod nvme_tcp 00:13:28.087 rmmod nvme_fabrics 00:13:28.087 rmmod nvme_keyring 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 944455 ']' 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 944455 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 944455 ']' 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 944455 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944455 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:28.087 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944455' 00:13:28.087 killing process with pid 944455 00:13:28.088 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 944455 00:13:28.088 12:08:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 944455 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.346 12:08:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.924 12:08:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:30.924 00:13:30.924 real 0m20.927s 00:13:30.924 user 0m27.172s 00:13:30.924 sys 0m4.131s 00:13:30.924 12:08:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.924 12:08:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:30.924 ************************************ 00:13:30.924 END TEST nvmf_ns_masking 00:13:30.924 ************************************ 00:13:30.924 12:08:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:30.924 12:08:38 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:30.924 12:08:38 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:30.924 12:08:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.924 12:08:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.924 12:08:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.924 ************************************ 00:13:30.924 START TEST nvmf_nvme_cli 00:13:30.924 ************************************ 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:30.924 * Looking for test storage... 00:13:30.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.924 12:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:32.830 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:32.830 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.830 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.831 12:08:40 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:32.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:32.831 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- 
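For reference, the discovery loop above reduces to a short pattern: the harness caches PCI vendor:device pairs (0x8086:0x159b is an Intel E810-family function driven by ice on this rig), then resolves each function to its kernel net device through sysfs. A sketch of that resolution, using the paths and names this run reported:
  pci=0000:0a:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob yields .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path -> cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"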
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:13:32.831 00:13:32.831 --- 10.0.0.2 ping statistics --- 00:13:32.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.831 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:13:32.831 00:13:32.831 --- 10.0.0.1 ping statistics --- 00:13:32.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.831 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=948503 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 948503 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 948503 ']' 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
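The TCP init above builds a two-endpoint topology on a single host by moving the target-side port into a private network namespace; condensed from the trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run):
  ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # reachability both ways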
00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.831 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.831 [2024-07-22 12:08:40.612488] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:13:32.831 [2024-07-22 12:08:40.612565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.831 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.831 [2024-07-22 12:08:40.649934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:32.831 [2024-07-22 12:08:40.681992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.090 [2024-07-22 12:08:40.771609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.090 [2024-07-22 12:08:40.771677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.090 [2024-07-22 12:08:40.771695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.090 [2024-07-22 12:08:40.771706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.090 [2024-07-22 12:08:40.771716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.090 [2024-07-22 12:08:40.771765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.090 [2024-07-22 12:08:40.771835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.090 [2024-07-22 12:08:40.771899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.090 [2024-07-22 12:08:40.771901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 [2024-07-22 12:08:40.926511] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 Malloc0 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- 
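With the data path verified, the target application is launched inside that namespace; the reactor notices above confirm the 0xF core mask mapped to cores 0-3. A condensed form of the launch (the full jenkins workspace path is shortened here, otherwise as traced):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                   # 948503 in this run
  waitforlisten "$nvmfpid"     # blocks until the app answers on /var/tmp/spdk.sock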
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 Malloc1 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.090 [2024-07-22 12:08:41.012071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.090 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:33.349 00:13:33.349 Discovery Log Number of Records 2, Generation counter 2 00:13:33.349 =====Discovery Log Entry 0====== 00:13:33.349 trtype: tcp 00:13:33.349 adrfam: ipv4 00:13:33.349 subtype: current discovery subsystem 00:13:33.349 treq: not required 00:13:33.349 portid: 0 00:13:33.349 trsvcid: 4420 00:13:33.349 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:33.349 traddr: 10.0.0.2 00:13:33.349 eflags: explicit discovery connections, duplicate discovery information 00:13:33.349 sectype: none 
00:13:33.349 =====Discovery Log Entry 1====== 00:13:33.349 trtype: tcp 00:13:33.349 adrfam: ipv4 00:13:33.349 subtype: nvme subsystem 00:13:33.349 treq: not required 00:13:33.349 portid: 0 00:13:33.349 trsvcid: 4420 00:13:33.349 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:33.349 traddr: 10.0.0.2 00:13:33.349 eflags: none 00:13:33.349 sectype: none 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:33.349 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.915 12:08:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:33.915 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:33.915 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.915 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:33.915 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:33.915 12:08:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:43 
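Taken together, the RPC and nvme-cli steps above amount to the sequence below (rpc.py abbreviates the scripts/rpc.py path from the trace; "${NVME_HOST[@]}" carries the --hostnqn/--hostid pair the harness derives from nvme gen-hostnqn):
  rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport opts exactly as traced
  rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420   # two records: discovery + cnode1
  nvme connect  "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420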
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:36.439 /dev/nvme0n1 ]] 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.439 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.440 12:08:44 
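The waitforserial/waitforserial_disconnect helpers seen above poll for the expected number of block devices by serial number rather than sleeping a fixed interval; the shape of that loop, sketched from the helper lines in the trace rather than copied verbatim:
  nvme_device_counter=2
  i=0
  while (( i++ <= 15 )); do
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( nvme_devices == nvme_device_counter )) && break
      sleep 2
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # then poll until the serial disappears again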
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.440 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.440 rmmod nvme_tcp 00:13:36.696 rmmod nvme_fabrics 00:13:36.696 rmmod nvme_keyring 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 948503 ']' 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 948503 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 948503 ']' 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 948503 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 948503 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 948503' 00:13:36.696 killing process with pid 948503 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 948503 00:13:36.696 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 948503 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.954 12:08:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.852 12:08:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.852 00:13:38.852 real 0m8.481s 00:13:38.852 user 0m16.140s 00:13:38.852 sys 0m2.260s 00:13:39.110 12:08:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.110 12:08:46 nvmf_tcp.nvmf_nvme_cli -- 
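Teardown above mirrors the setup: the host-side NVMe modules are unloaded (the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away), the target process is killed by pid, and the initiator address is flushed before the namespace is removed. Roughly, with killprocess internals simplified:
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess also sanity-checks the process name first
  ip -4 addr flush cvl_0_1
  _remove_spdk_ns                      # drops cvl_0_0_ns_spdk; its xtrace is silenced via fd 14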
common/autotest_common.sh@10 -- # set +x 00:13:39.110 ************************************ 00:13:39.110 END TEST nvmf_nvme_cli 00:13:39.110 ************************************ 00:13:39.110 12:08:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:39.110 12:08:46 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:39.110 12:08:46 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:39.110 12:08:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:39.110 12:08:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.110 12:08:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.110 ************************************ 00:13:39.110 START TEST nvmf_vfio_user 00:13:39.110 ************************************ 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:39.110 * Looking for test storage... 00:13:39.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.110 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=949364 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 949364' 00:13:39.111 Process pid: 949364 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 949364 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 949364 ']' 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.111 12:08:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:39.111 [2024-07-22 12:08:46.945282] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:13:39.111 [2024-07-22 12:08:46.945364] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.111 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.111 [2024-07-22 12:08:46.981635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:39.111 [2024-07-22 12:08:47.032357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.368 [2024-07-22 12:08:47.140842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.368 [2024-07-22 12:08:47.140916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:39.368 [2024-07-22 12:08:47.140949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.368 [2024-07-22 12:08:47.140976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.368 [2024-07-22 12:08:47.141000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.368 [2024-07-22 12:08:47.141074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.368 [2024-07-22 12:08:47.141133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.368 [2024-07-22 12:08:47.141168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.368 [2024-07-22 12:08:47.141181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.624 12:08:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.624 12:08:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:39.624 12:08:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:40.556 12:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:40.814 12:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:40.814 12:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:40.814 12:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:40.814 12:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:40.814 12:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:41.071 Malloc1 00:13:41.071 12:08:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:41.326 12:08:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:41.583 12:08:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:41.839 12:08:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.839 12:08:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:41.839 12:08:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:42.096 Malloc2 00:13:42.096 12:08:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:42.352 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:42.608 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
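Unlike the TCP test, which addressed the target by IP and port, the vfio-user variant addresses it by filesystem path: each emulated controller is a directory under /var/run/vfio-user, and the listener's traddr is that directory with service id 0. The per-device sequence traced around this point, condensed (vfio-user1/1 shown; vfio-user2/2 is identical with Malloc2 and cnode2; rpc.py abbreviated as before):
  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0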
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:42.866 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:42.866 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:42.866 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.866 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:42.866 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:42.866 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:42.866 [2024-07-22 12:08:50.609887] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:13:42.866 [2024-07-22 12:08:50.609941] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949787 ] 00:13:42.866 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.866 [2024-07-22 12:08:50.627353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:42.866 [2024-07-22 12:08:50.644995] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:42.866 [2024-07-22 12:08:50.647484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:42.866 [2024-07-22 12:08:50.647514] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9f9e09e000 00:13:42.866 [2024-07-22 12:08:50.648470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.866 [2024-07-22 12:08:50.649471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.866 [2024-07-22 12:08:50.650475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.866 [2024-07-22 12:08:50.651480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:42.866 [2024-07-22 12:08:50.652483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:42.866 [2024-07-22 12:08:50.653488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.866 [2024-07-22 12:08:50.654492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:42.867 [2024-07-22 12:08:50.655498] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap 
offset 0 00:13:42.867 [2024-07-22 12:08:50.656505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:42.867 [2024-07-22 12:08:50.656525] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9f9ce60000 00:13:42.867 [2024-07-22 12:08:50.657686] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:42.867 [2024-07-22 12:08:50.672258] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:42.867 [2024-07-22 12:08:50.672291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:42.867 [2024-07-22 12:08:50.677651] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:42.867 [2024-07-22 12:08:50.677720] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:42.867 [2024-07-22 12:08:50.677821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:42.867 [2024-07-22 12:08:50.677858] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:42.867 [2024-07-22 12:08:50.677870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:42.867 [2024-07-22 12:08:50.679623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:42.867 [2024-07-22 12:08:50.679648] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:42.867 [2024-07-22 12:08:50.679662] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:42.867 [2024-07-22 12:08:50.680653] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:42.867 [2024-07-22 12:08:50.680677] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:42.867 [2024-07-22 12:08:50.680703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:42.867 [2024-07-22 12:08:50.681655] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:42.867 [2024-07-22 12:08:50.681675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:42.867 [2024-07-22 12:08:50.682680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:42.867 [2024-07-22 12:08:50.682700] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:42.867 [2024-07-22 
12:08:50.682709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:42.867 [2024-07-22 12:08:50.682720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:42.867 [2024-07-22 12:08:50.682830] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:42.867 [2024-07-22 12:08:50.682839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:42.867 [2024-07-22 12:08:50.682848] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:42.867 [2024-07-22 12:08:50.683672] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:42.867 [2024-07-22 12:08:50.684675] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:42.867 [2024-07-22 12:08:50.685688] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:42.867 [2024-07-22 12:08:50.686678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.867 [2024-07-22 12:08:50.686807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:42.867 [2024-07-22 12:08:50.687695] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:42.867 [2024-07-22 12:08:50.687713] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:42.867 [2024-07-22 12:08:50.687722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.687746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:42.867 [2024-07-22 12:08:50.687759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.687790] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:42.867 [2024-07-22 12:08:50.687800] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.867 [2024-07-22 12:08:50.687807] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.867 [2024-07-22 12:08:50.687829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:42.867 [2024-07-22 12:08:50.687916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:42.867 [2024-07-22 12:08:50.687943] 
nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:42.867 [2024-07-22 12:08:50.687952] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:42.867 [2024-07-22 12:08:50.687960] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:42.867 [2024-07-22 12:08:50.687968] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:42.867 [2024-07-22 12:08:50.687976] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:42.867 [2024-07-22 12:08:50.687984] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:42.867 [2024-07-22 12:08:50.687991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:42.867 [2024-07-22 12:08:50.688034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:42.867 [2024-07-22 12:08:50.688056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.867 [2024-07-22 12:08:50.688069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.867 [2024-07-22 12:08:50.688081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.867 [2024-07-22 12:08:50.688092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.867 [2024-07-22 12:08:50.688100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:42.867 [2024-07-22 12:08:50.688139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:42.867 [2024-07-22 12:08:50.688150] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:42.867 [2024-07-22 12:08:50.688158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688168] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:42.867 [2024-07-22 12:08:50.688204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:42.867 [2024-07-22 12:08:50.688269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688302] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:42.867 [2024-07-22 12:08:50.688309] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:42.867 [2024-07-22 12:08:50.688315] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.867 [2024-07-22 12:08:50.688324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:42.867 [2024-07-22 12:08:50.688338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:42.867 [2024-07-22 12:08:50.688362] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:42.867 [2024-07-22 12:08:50.688380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688407] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:42.867 [2024-07-22 12:08:50.688414] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.867 [2024-07-22 12:08:50.688420] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.867 [2024-07-22 12:08:50.688429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:42.867 [2024-07-22 12:08:50.688448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:42.867 [2024-07-22 12:08:50.688471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:42.867 [2024-07-22 12:08:50.688486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:42.867 
[2024-07-22 12:08:50.688497] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:42.867 [2024-07-22 12:08:50.688505] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.867 [2024-07-22 12:08:50.688510] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.867 [2024-07-22 12:08:50.688519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.688548] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:42.868 [2024-07-22 12:08:50.688559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:42.868 [2024-07-22 12:08:50.688574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:42.868 [2024-07-22 12:08:50.688585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:42.868 [2024-07-22 12:08:50.688608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:42.868 [2024-07-22 12:08:50.688630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:42.868 [2024-07-22 12:08:50.688641] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:42.868 [2024-07-22 12:08:50.688648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:42.868 [2024-07-22 12:08:50.688657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:42.868 [2024-07-22 12:08:50.688686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.688725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.688753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.688781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.688815] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:42.868 [2024-07-22 12:08:50.688825] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:42.868 [2024-07-22 12:08:50.688832] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:42.868 [2024-07-22 12:08:50.688838] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:42.868 [2024-07-22 12:08:50.688844] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:42.868 [2024-07-22 12:08:50.688853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:42.868 [2024-07-22 12:08:50.688864] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:42.868 [2024-07-22 12:08:50.688871] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:42.868 [2024-07-22 12:08:50.688877] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.868 [2024-07-22 12:08:50.688886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688896] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:42.868 [2024-07-22 12:08:50.688919] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.868 [2024-07-22 12:08:50.688925] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.868 [2024-07-22 12:08:50.688934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688946] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:42.868 [2024-07-22 12:08:50.688957] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:42.868 [2024-07-22 12:08:50.688963] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.868 [2024-07-22 12:08:50.688972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:42.868 [2024-07-22 12:08:50.688983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.689002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.689018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:42.868 [2024-07-22 12:08:50.689029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:42.868 ===================================================== 00:13:42.868 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:42.868 ===================================================== 00:13:42.868 Controller Capabilities/Features 00:13:42.868 ================================ 00:13:42.868 Vendor ID: 4e58 00:13:42.868 Subsystem Vendor ID: 4e58 00:13:42.868 Serial Number: SPDK1 00:13:42.868 Model Number: SPDK bdev Controller 00:13:42.868 Firmware Version: 24.09 00:13:42.868 Recommended Arb Burst: 6 00:13:42.868 IEEE OUI Identifier: 8d 6b 50 00:13:42.868 Multi-path I/O 00:13:42.868 May have multiple subsystem ports: Yes 00:13:42.868 May have multiple controllers: Yes 00:13:42.868 Associated with SR-IOV VF: No 00:13:42.868 Max Data Transfer Size: 131072 00:13:42.868 Max Number of Namespaces: 32 00:13:42.868 Max Number of I/O Queues: 127 00:13:42.868 NVMe Specification Version (VS): 1.3 00:13:42.868 NVMe Specification Version (Identify): 1.3 00:13:42.868 Maximum Queue Entries: 256 00:13:42.868 Contiguous Queues Required: Yes 00:13:42.868 Arbitration Mechanisms Supported 00:13:42.868 Weighted Round Robin: Not Supported 00:13:42.868 Vendor Specific: Not Supported 00:13:42.868 Reset Timeout: 15000 ms 00:13:42.868 Doorbell Stride: 4 bytes 00:13:42.868 NVM Subsystem Reset: Not Supported 00:13:42.868 Command Sets Supported 00:13:42.868 NVM Command Set: Supported 00:13:42.868 Boot Partition: Not Supported 00:13:42.868 Memory Page Size Minimum: 4096 bytes 00:13:42.868 Memory Page Size Maximum: 4096 bytes 00:13:42.868 Persistent Memory Region: Not Supported 00:13:42.868 Optional Asynchronous Events Supported 00:13:42.868 Namespace Attribute Notices: Supported 00:13:42.868 Firmware Activation Notices: Not Supported 00:13:42.868 ANA Change Notices: Not Supported 00:13:42.868 PLE Aggregate Log Change Notices: Not Supported 00:13:42.868 LBA Status Info Alert Notices: Not Supported 00:13:42.868 EGE Aggregate Log Change Notices: Not Supported 00:13:42.868 Normal NVM Subsystem Shutdown event: Not Supported 00:13:42.868 Zone Descriptor Change Notices: Not Supported 00:13:42.868 Discovery Log Change Notices: Not Supported 00:13:42.868 Controller Attributes 00:13:42.868 128-bit Host Identifier: Supported 00:13:42.868 Non-Operational Permissive Mode: Not Supported 00:13:42.868 NVM Sets: Not Supported 00:13:42.868 Read Recovery Levels: Not Supported 00:13:42.868 Endurance Groups: Not Supported 00:13:42.868 Predictable Latency Mode: Not Supported 00:13:42.868 Traffic Based Keep ALive: Not Supported 00:13:42.868 Namespace Granularity: Not Supported 00:13:42.868 SQ Associations: Not Supported 00:13:42.868 UUID List: Not Supported 00:13:42.868 Multi-Domain Subsystem: Not Supported 00:13:42.868 Fixed Capacity Management: Not Supported 00:13:42.868 Variable Capacity Management: Not Supported 00:13:42.868 Delete Endurance Group: Not Supported 00:13:42.868 Delete NVM Set: Not Supported 00:13:42.868 Extended LBA Formats Supported: Not Supported 00:13:42.868 Flexible Data Placement Supported: Not Supported 00:13:42.868 00:13:42.868 Controller Memory Buffer Support 00:13:42.868 ================================ 00:13:42.868 Supported: No 00:13:42.868 00:13:42.868 Persistent Memory Region Support 00:13:42.868 ================================ 00:13:42.868 Supported: No 00:13:42.868 00:13:42.868 Admin Command Set Attributes 00:13:42.868 ============================ 00:13:42.868 Security Send/Receive: Not Supported 
00:13:42.868 Format NVM: Not Supported 00:13:42.868 Firmware Activate/Download: Not Supported 00:13:42.868 Namespace Management: Not Supported 00:13:42.868 Device Self-Test: Not Supported 00:13:42.868 Directives: Not Supported 00:13:42.868 NVMe-MI: Not Supported 00:13:42.868 Virtualization Management: Not Supported 00:13:42.868 Doorbell Buffer Config: Not Supported 00:13:42.868 Get LBA Status Capability: Not Supported 00:13:42.868 Command & Feature Lockdown Capability: Not Supported 00:13:42.868 Abort Command Limit: 4 00:13:42.868 Async Event Request Limit: 4 00:13:42.868 Number of Firmware Slots: N/A 00:13:42.868 Firmware Slot 1 Read-Only: N/A 00:13:42.868 Firmware Activation Without Reset: N/A 00:13:42.868 Multiple Update Detection Support: N/A 00:13:42.868 Firmware Update Granularity: No Information Provided 00:13:42.868 Per-Namespace SMART Log: No 00:13:42.868 Asymmetric Namespace Access Log Page: Not Supported 00:13:42.868 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:42.868 Command Effects Log Page: Supported 00:13:42.868 Get Log Page Extended Data: Supported 00:13:42.868 Telemetry Log Pages: Not Supported 00:13:42.868 Persistent Event Log Pages: Not Supported 00:13:42.868 Supported Log Pages Log Page: May Support 00:13:42.868 Commands Supported & Effects Log Page: Not Supported 00:13:42.868 Feature Identifiers & Effects Log Page:May Support 00:13:42.868 NVMe-MI Commands & Effects Log Page: May Support 00:13:42.868 Data Area 4 for Telemetry Log: Not Supported 00:13:42.868 Error Log Page Entries Supported: 128 00:13:42.868 Keep Alive: Supported 00:13:42.868 Keep Alive Granularity: 10000 ms 00:13:42.868 00:13:42.868 NVM Command Set Attributes 00:13:42.868 ========================== 00:13:42.868 Submission Queue Entry Size 00:13:42.868 Max: 64 00:13:42.869 Min: 64 00:13:42.869 Completion Queue Entry Size 00:13:42.869 Max: 16 00:13:42.869 Min: 16 00:13:42.869 Number of Namespaces: 32 00:13:42.869 Compare Command: Supported 00:13:42.869 Write Uncorrectable Command: Not Supported 00:13:42.869 Dataset Management Command: Supported 00:13:42.869 Write Zeroes Command: Supported 00:13:42.869 Set Features Save Field: Not Supported 00:13:42.869 Reservations: Not Supported 00:13:42.869 Timestamp: Not Supported 00:13:42.869 Copy: Supported 00:13:42.869 Volatile Write Cache: Present 00:13:42.869 Atomic Write Unit (Normal): 1 00:13:42.869 Atomic Write Unit (PFail): 1 00:13:42.869 Atomic Compare & Write Unit: 1 00:13:42.869 Fused Compare & Write: Supported 00:13:42.869 Scatter-Gather List 00:13:42.869 SGL Command Set: Supported (Dword aligned) 00:13:42.869 SGL Keyed: Not Supported 00:13:42.869 SGL Bit Bucket Descriptor: Not Supported 00:13:42.869 SGL Metadata Pointer: Not Supported 00:13:42.869 Oversized SGL: Not Supported 00:13:42.869 SGL Metadata Address: Not Supported 00:13:42.869 SGL Offset: Not Supported 00:13:42.869 Transport SGL Data Block: Not Supported 00:13:42.869 Replay Protected Memory Block: Not Supported 00:13:42.869 00:13:42.869 Firmware Slot Information 00:13:42.869 ========================= 00:13:42.869 Active slot: 1 00:13:42.869 Slot 1 Firmware Revision: 24.09 00:13:42.869 00:13:42.869 00:13:42.869 Commands Supported and Effects 00:13:42.869 ============================== 00:13:42.869 Admin Commands 00:13:42.869 -------------- 00:13:42.869 Get Log Page (02h): Supported 00:13:42.869 Identify (06h): Supported 00:13:42.869 Abort (08h): Supported 00:13:42.869 Set Features (09h): Supported 00:13:42.869 Get Features (0Ah): Supported 00:13:42.869 Asynchronous Event Request (0Ch): 
Supported 00:13:42.869 Keep Alive (18h): Supported 00:13:42.869 I/O Commands 00:13:42.869 ------------ 00:13:42.869 Flush (00h): Supported LBA-Change 00:13:42.869 Write (01h): Supported LBA-Change 00:13:42.869 Read (02h): Supported 00:13:42.869 Compare (05h): Supported 00:13:42.869 Write Zeroes (08h): Supported LBA-Change 00:13:42.869 Dataset Management (09h): Supported LBA-Change 00:13:42.869 Copy (19h): Supported LBA-Change 00:13:42.869 00:13:42.869 Error Log 00:13:42.869 ========= 00:13:42.869 00:13:42.869 Arbitration 00:13:42.869 =========== 00:13:42.869 Arbitration Burst: 1 00:13:42.869 00:13:42.869 Power Management 00:13:42.869 ================ 00:13:42.869 Number of Power States: 1 00:13:42.869 Current Power State: Power State #0 00:13:42.869 Power State #0: 00:13:42.869 Max Power: 0.00 W 00:13:42.869 Non-Operational State: Operational 00:13:42.869 Entry Latency: Not Reported 00:13:42.869 Exit Latency: Not Reported 00:13:42.869 Relative Read Throughput: 0 00:13:42.869 Relative Read Latency: 0 00:13:42.869 Relative Write Throughput: 0 00:13:42.869 Relative Write Latency: 0 00:13:42.869 Idle Power: Not Reported 00:13:42.869 Active Power: Not Reported 00:13:42.869 Non-Operational Permissive Mode: Not Supported 00:13:42.869 00:13:42.869 Health Information 00:13:42.869 ================== 00:13:42.869 Critical Warnings: 00:13:42.869 Available Spare Space: OK 00:13:42.869 Temperature: OK 00:13:42.869 Device Reliability: OK 00:13:42.869 Read Only: No 00:13:42.869 Volatile Memory Backup: OK 00:13:42.869 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:42.869 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:42.869 Available Spare: 0% 00:13:42.869 Available Spare Threshold: 0% 00:13:42.869 [2024-07-22 12:08:50.689148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:42.869 [2024-07-22 12:08:50.689165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:42.869 [2024-07-22 12:08:50.689208] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:42.869 [2024-07-22 12:08:50.689226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.869 [2024-07-22 12:08:50.689236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.869 [2024-07-22 12:08:50.689246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.869 [2024-07-22 12:08:50.689255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.869 [2024-07-22 12:08:50.691625] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:42.869 [2024-07-22 12:08:50.691649] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:42.869 [2024-07-22 12:08:50.691711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.869 [2024-07-22 12:08:50.691782] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:42.869 [2024-07-22 12:08:50.691796]
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:42.869 [2024-07-22 12:08:50.692721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:42.869 [2024-07-22 12:08:50.692744] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:42.869 [2024-07-22 12:08:50.692801] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:42.869 [2024-07-22 12:08:50.696624] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:42.869 Life Percentage Used: 0% 00:13:42.869 Data Units Read: 0 00:13:42.869 Data Units Written: 0 00:13:42.869 Host Read Commands: 0 00:13:42.869 Host Write Commands: 0 00:13:42.869 Controller Busy Time: 0 minutes 00:13:42.869 Power Cycles: 0 00:13:42.869 Power On Hours: 0 hours 00:13:42.869 Unsafe Shutdowns: 0 00:13:42.869 Unrecoverable Media Errors: 0 00:13:42.869 Lifetime Error Log Entries: 0 00:13:42.869 Warning Temperature Time: 0 minutes 00:13:42.869 Critical Temperature Time: 0 minutes 00:13:42.869 00:13:42.869 Number of Queues 00:13:42.869 ================ 00:13:42.869 Number of I/O Submission Queues: 127 00:13:42.869 Number of I/O Completion Queues: 127 00:13:42.869 00:13:42.869 Active Namespaces 00:13:42.869 ================= 00:13:42.869 Namespace ID:1 00:13:42.869 Error Recovery Timeout: Unlimited 00:13:42.869 Command Set Identifier: NVM (00h) 00:13:42.869 Deallocate: Supported 00:13:42.869 Deallocated/Unwritten Error: Not Supported 00:13:42.869 Deallocated Read Value: Unknown 00:13:42.869 Deallocate in Write Zeroes: Not Supported 00:13:42.869 Deallocated Guard Field: 0xFFFF 00:13:42.869 Flush: Supported 00:13:42.869 Reservation: Supported 00:13:42.869 Namespace Sharing Capabilities: Multiple Controllers 00:13:42.869 Size (in LBAs): 131072 (0GiB) 00:13:42.869 Capacity (in LBAs): 131072 (0GiB) 00:13:42.869 Utilization (in LBAs): 131072 (0GiB) 00:13:42.869 NGUID: 2AEC63D427584F9DB4D41F548960B092 00:13:42.869 UUID: 2aec63d4-2758-4f9d-b4d4-1f548960b092 00:13:42.869 Thin Provisioning: Not Supported 00:13:42.869 Per-NS Atomic Units: Yes 00:13:42.869 Atomic Boundary Size (Normal): 0 00:13:42.869 Atomic Boundary Size (PFail): 0 00:13:42.869 Atomic Boundary Offset: 0 00:13:42.869 Maximum Single Source Range Length: 65535 00:13:42.869 Maximum Copy Length: 65535 00:13:42.869 Maximum Source Range Count: 1 00:13:42.869 NGUID/EUI64 Never Reused: No 00:13:42.869 Namespace Write Protected: No 00:13:42.869 Number of LBA Formats: 1 00:13:42.869 Current LBA Format: LBA Format #00 00:13:42.869 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:42.869 00:13:42.869 12:08:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:42.869 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.127 [2024-07-22 12:08:50.928521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.405 Initializing NVMe Controllers 00:13:48.405 Attached to NVMe over Fabrics controller at
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:48.405 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:48.405 Initialization complete. Launching workers. 00:13:48.405 ======================================================== 00:13:48.405 Latency(us) 00:13:48.405 Device Information : IOPS MiB/s Average min max 00:13:48.405 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34965.15 136.58 3660.11 1169.19 7415.71 00:13:48.405 ======================================================== 00:13:48.405 Total : 34965.15 136.58 3660.11 1169.19 7415.71 00:13:48.405 00:13:48.405 [2024-07-22 12:08:55.950893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.405 12:08:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:48.405 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.405 [2024-07-22 12:08:56.187085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:53.669 Initializing NVMe Controllers 00:13:53.669 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:53.669 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:53.669 Initialization complete. Launching workers. 00:13:53.669 ======================================================== 00:13:53.669 Latency(us) 00:13:53.669 Device Information : IOPS MiB/s Average min max 00:13:53.669 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16012.57 62.55 7998.96 5971.34 15962.81 00:13:53.669 ======================================================== 00:13:53.669 Total : 16012.57 62.55 7998.96 5971.34 15962.81 00:13:53.669 00:13:53.669 [2024-07-22 12:09:01.222351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:53.669 12:09:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:53.669 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.669 [2024-07-22 12:09:01.437424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:58.929 [2024-07-22 12:09:06.515986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:58.929 Initializing NVMe Controllers 00:13:58.929 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:58.929 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:58.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:58.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:58.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:58.929 Initialization complete. Launching workers. 
00:13:58.929 Starting thread on core 2 00:13:58.929 Starting thread on core 3 00:13:58.929 Starting thread on core 1 00:13:58.929 12:09:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:58.929 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.929 [2024-07-22 12:09:06.825063] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:02.204 [2024-07-22 12:09:09.881023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:02.204 Initializing NVMe Controllers 00:14:02.204 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.204 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:02.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:02.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:02.204 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:02.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:02.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:02.204 Initialization complete. Launching workers. 00:14:02.204 Starting thread on core 1 with urgent priority queue 00:14:02.204 Starting thread on core 2 with urgent priority queue 00:14:02.204 Starting thread on core 3 with urgent priority queue 00:14:02.204 Starting thread on core 0 with urgent priority queue 00:14:02.204 SPDK bdev Controller (SPDK1 ) core 0: 5570.67 IO/s 17.95 secs/100000 ios 00:14:02.204 SPDK bdev Controller (SPDK1 ) core 1: 5674.67 IO/s 17.62 secs/100000 ios 00:14:02.204 SPDK bdev Controller (SPDK1 ) core 2: 5616.67 IO/s 17.80 secs/100000 ios 00:14:02.204 SPDK bdev Controller (SPDK1 ) core 3: 5514.67 IO/s 18.13 secs/100000 ios 00:14:02.204 ======================================================== 00:14:02.204 00:14:02.204 12:09:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:02.204 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.461 [2024-07-22 12:09:10.193225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:02.461 Initializing NVMe Controllers 00:14:02.461 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.461 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.461 Namespace ID: 1 size: 0GB 00:14:02.461 Initialization complete. 00:14:02.461 INFO: using host memory buffer for IO 00:14:02.461 Hello world! 
00:14:02.461 [2024-07-22 12:09:10.229951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:02.461 12:09:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:02.461 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.717 [2024-07-22 12:09:10.523076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:03.649 Initializing NVMe Controllers 00:14:03.649 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.649 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.649 Initialization complete. Launching workers. 00:14:03.649 submit (in ns) avg, min, max = 7469.9, 3472.2, 4019334.4 00:14:03.650 complete (in ns) avg, min, max = 23883.6, 2063.3, 5995952.2 00:14:03.650 00:14:03.650 Submit histogram 00:14:03.650 ================ 00:14:03.650 Range in us Cumulative Count 00:14:03.650 3.461 - 3.484: 0.0222% ( 3) 00:14:03.650 3.484 - 3.508: 0.3622% ( 46) 00:14:03.650 3.508 - 3.532: 1.5523% ( 161) 00:14:03.650 3.532 - 3.556: 4.1987% ( 358) 00:14:03.650 3.556 - 3.579: 11.1620% ( 942) 00:14:03.650 3.579 - 3.603: 20.0030% ( 1196) 00:14:03.650 3.603 - 3.627: 30.6106% ( 1435) 00:14:03.650 3.627 - 3.650: 40.1242% ( 1287) 00:14:03.650 3.650 - 3.674: 48.4329% ( 1124) 00:14:03.650 3.674 - 3.698: 54.1913% ( 779) 00:14:03.650 3.698 - 3.721: 59.1514% ( 671) 00:14:03.650 3.721 - 3.745: 62.6404% ( 472) 00:14:03.650 3.745 - 3.769: 66.0925% ( 467) 00:14:03.650 3.769 - 3.793: 69.3746% ( 444) 00:14:03.650 3.793 - 3.816: 72.7011% ( 450) 00:14:03.650 3.816 - 3.840: 76.3749% ( 497) 00:14:03.650 3.840 - 3.864: 80.3888% ( 543) 00:14:03.650 3.864 - 3.887: 83.7596% ( 456) 00:14:03.650 3.887 - 3.911: 86.0585% ( 311) 00:14:03.650 3.911 - 3.935: 88.1727% ( 286) 00:14:03.650 3.935 - 3.959: 89.4959% ( 179) 00:14:03.650 3.959 - 3.982: 90.8634% ( 185) 00:14:03.650 3.982 - 4.006: 91.9352% ( 145) 00:14:03.650 4.006 - 4.030: 92.8001% ( 117) 00:14:03.650 4.030 - 4.053: 93.6650% ( 117) 00:14:03.650 4.053 - 4.077: 94.4633% ( 108) 00:14:03.650 4.077 - 4.101: 95.1286% ( 90) 00:14:03.650 4.101 - 4.124: 95.5943% ( 63) 00:14:03.650 4.124 - 4.148: 96.0083% ( 56) 00:14:03.650 4.148 - 4.172: 96.3114% ( 41) 00:14:03.650 4.172 - 4.196: 96.5553% ( 33) 00:14:03.650 4.196 - 4.219: 96.6957% ( 19) 00:14:03.650 4.219 - 4.243: 96.8288% ( 18) 00:14:03.650 4.243 - 4.267: 96.9397% ( 15) 00:14:03.650 4.267 - 4.290: 97.0210% ( 11) 00:14:03.650 4.290 - 4.314: 97.0801% ( 8) 00:14:03.650 4.314 - 4.338: 97.1688% ( 12) 00:14:03.650 4.338 - 4.361: 97.2354% ( 9) 00:14:03.650 4.361 - 4.385: 97.2649% ( 4) 00:14:03.650 4.385 - 4.409: 97.3019% ( 5) 00:14:03.650 4.409 - 4.433: 97.3241% ( 3) 00:14:03.650 4.433 - 4.456: 97.3536% ( 4) 00:14:03.650 4.480 - 4.504: 97.3758% ( 3) 00:14:03.650 4.504 - 4.527: 97.3906% ( 2) 00:14:03.650 4.527 - 4.551: 97.3980% ( 1) 00:14:03.650 4.575 - 4.599: 97.4054% ( 1) 00:14:03.650 4.599 - 4.622: 97.4128% ( 1) 00:14:03.650 4.622 - 4.646: 97.4202% ( 1) 00:14:03.650 4.646 - 4.670: 97.4276% ( 1) 00:14:03.650 4.670 - 4.693: 97.4497% ( 3) 00:14:03.650 4.693 - 4.717: 97.4571% ( 1) 00:14:03.650 4.717 - 4.741: 97.4719% ( 2) 00:14:03.650 4.741 - 4.764: 97.5089% ( 5) 00:14:03.650 4.764 - 4.788: 97.5532% ( 6) 00:14:03.650 4.788 - 4.812: 97.5976% ( 6) 00:14:03.650 4.812 - 4.836: 97.6493% ( 7) 00:14:03.650 4.836 
- 4.859: 97.6937% ( 6) 00:14:03.650 4.859 - 4.883: 97.7306% ( 5) 00:14:03.650 4.883 - 4.907: 97.7676% ( 5) 00:14:03.650 4.907 - 4.930: 97.7898% ( 3) 00:14:03.650 4.930 - 4.954: 97.8267% ( 5) 00:14:03.650 4.954 - 4.978: 97.8933% ( 9) 00:14:03.650 4.978 - 5.001: 97.9154% ( 3) 00:14:03.650 5.001 - 5.025: 97.9746% ( 8) 00:14:03.650 5.025 - 5.049: 97.9967% ( 3) 00:14:03.650 5.049 - 5.073: 98.0263% ( 4) 00:14:03.650 5.073 - 5.096: 98.0411% ( 2) 00:14:03.650 5.096 - 5.120: 98.0485% ( 1) 00:14:03.650 5.120 - 5.144: 98.0633% ( 2) 00:14:03.650 5.144 - 5.167: 98.0855% ( 3) 00:14:03.650 5.167 - 5.191: 98.0928% ( 1) 00:14:03.650 5.191 - 5.215: 98.1150% ( 3) 00:14:03.650 5.215 - 5.239: 98.1372% ( 3) 00:14:03.650 5.239 - 5.262: 98.1594% ( 3) 00:14:03.650 5.286 - 5.310: 98.1815% ( 3) 00:14:03.650 5.333 - 5.357: 98.1889% ( 1) 00:14:03.650 5.404 - 5.428: 98.1963% ( 1) 00:14:03.650 5.736 - 5.760: 98.2037% ( 1) 00:14:03.650 5.760 - 5.784: 98.2111% ( 1) 00:14:03.650 5.926 - 5.950: 98.2259% ( 2) 00:14:03.650 6.044 - 6.068: 98.2407% ( 2) 00:14:03.650 6.163 - 6.210: 98.2481% ( 1) 00:14:03.650 6.210 - 6.258: 98.2555% ( 1) 00:14:03.650 6.258 - 6.305: 98.2629% ( 1) 00:14:03.650 6.400 - 6.447: 98.2703% ( 1) 00:14:03.650 6.447 - 6.495: 98.2776% ( 1) 00:14:03.650 6.495 - 6.542: 98.2924% ( 2) 00:14:03.650 6.542 - 6.590: 98.2998% ( 1) 00:14:03.650 6.590 - 6.637: 98.3072% ( 1) 00:14:03.650 6.732 - 6.779: 98.3146% ( 1) 00:14:03.650 6.827 - 6.874: 98.3220% ( 1) 00:14:03.650 6.921 - 6.969: 98.3442% ( 3) 00:14:03.650 7.064 - 7.111: 98.3516% ( 1) 00:14:03.650 7.206 - 7.253: 98.3590% ( 1) 00:14:03.650 7.253 - 7.301: 98.3737% ( 2) 00:14:03.650 7.301 - 7.348: 98.3811% ( 1) 00:14:03.650 7.348 - 7.396: 98.3885% ( 1) 00:14:03.650 7.396 - 7.443: 98.3959% ( 1) 00:14:03.650 7.443 - 7.490: 98.4033% ( 1) 00:14:03.650 7.490 - 7.538: 98.4107% ( 1) 00:14:03.650 7.538 - 7.585: 98.4181% ( 1) 00:14:03.650 7.727 - 7.775: 98.4255% ( 1) 00:14:03.650 7.775 - 7.822: 98.4329% ( 1) 00:14:03.650 7.822 - 7.870: 98.4477% ( 2) 00:14:03.650 7.870 - 7.917: 98.4551% ( 1) 00:14:03.650 7.964 - 8.012: 98.4698% ( 2) 00:14:03.650 8.059 - 8.107: 98.4920% ( 3) 00:14:03.650 8.107 - 8.154: 98.5216% ( 4) 00:14:03.650 8.154 - 8.201: 98.5290% ( 1) 00:14:03.650 8.296 - 8.344: 98.5512% ( 3) 00:14:03.650 8.344 - 8.391: 98.5733% ( 3) 00:14:03.650 8.391 - 8.439: 98.5807% ( 1) 00:14:03.650 8.581 - 8.628: 98.5955% ( 2) 00:14:03.650 8.628 - 8.676: 98.6029% ( 1) 00:14:03.650 8.770 - 8.818: 98.6103% ( 1) 00:14:03.650 8.913 - 8.960: 98.6177% ( 1) 00:14:03.650 9.197 - 9.244: 98.6251% ( 1) 00:14:03.650 9.244 - 9.292: 98.6325% ( 1) 00:14:03.650 9.481 - 9.529: 98.6399% ( 1) 00:14:03.650 9.529 - 9.576: 98.6473% ( 1) 00:14:03.650 9.766 - 9.813: 98.6546% ( 1) 00:14:03.650 9.813 - 9.861: 98.6620% ( 1) 00:14:03.650 10.003 - 10.050: 98.6694% ( 1) 00:14:03.650 10.240 - 10.287: 98.6768% ( 1) 00:14:03.650 10.477 - 10.524: 98.6842% ( 1) 00:14:03.650 10.809 - 10.856: 98.6916% ( 1) 00:14:03.650 10.856 - 10.904: 98.6990% ( 1) 00:14:03.650 10.999 - 11.046: 98.7064% ( 1) 00:14:03.650 11.141 - 11.188: 98.7212% ( 2) 00:14:03.650 11.188 - 11.236: 98.7286% ( 1) 00:14:03.650 11.236 - 11.283: 98.7360% ( 1) 00:14:03.650 11.615 - 11.662: 98.7433% ( 1) 00:14:03.650 11.710 - 11.757: 98.7507% ( 1) 00:14:03.650 11.852 - 11.899: 98.7655% ( 2) 00:14:03.650 11.994 - 12.041: 98.7729% ( 1) 00:14:03.650 12.231 - 12.326: 98.7951% ( 3) 00:14:03.650 12.326 - 12.421: 98.8099% ( 2) 00:14:03.650 12.610 - 12.705: 98.8173% ( 1) 00:14:03.650 12.705 - 12.800: 98.8247% ( 1) 00:14:03.650 12.800 - 12.895: 98.8321% ( 1) 
00:14:03.650 12.895 - 12.990: 98.8468% ( 2) 00:14:03.650 12.990 - 13.084: 98.8616% ( 2) 00:14:03.650 13.274 - 13.369: 98.8690% ( 1) 00:14:03.650 13.369 - 13.464: 98.8912% ( 3) 00:14:03.650 13.464 - 13.559: 98.8986% ( 1) 00:14:03.650 13.559 - 13.653: 98.9134% ( 2) 00:14:03.650 14.886 - 14.981: 98.9208% ( 1) 00:14:03.650 14.981 - 15.076: 98.9281% ( 1) 00:14:03.650 17.256 - 17.351: 98.9429% ( 2) 00:14:03.650 17.351 - 17.446: 98.9651% ( 3) 00:14:03.650 17.446 - 17.541: 98.9799% ( 2) 00:14:03.650 17.541 - 17.636: 99.0021% ( 3) 00:14:03.650 17.636 - 17.730: 99.0390% ( 5) 00:14:03.650 17.730 - 17.825: 99.1277% ( 12) 00:14:03.650 17.825 - 17.920: 99.1647% ( 5) 00:14:03.650 17.920 - 18.015: 99.2312% ( 9) 00:14:03.650 18.015 - 18.110: 99.2904% ( 8) 00:14:03.650 18.110 - 18.204: 99.3569% ( 9) 00:14:03.650 18.204 - 18.299: 99.4160% ( 8) 00:14:03.650 18.299 - 18.394: 99.4752% ( 8) 00:14:03.650 18.394 - 18.489: 99.5565% ( 11) 00:14:03.650 18.489 - 18.584: 99.6304% ( 10) 00:14:03.650 18.584 - 18.679: 99.6821% ( 7) 00:14:03.650 18.679 - 18.773: 99.7043% ( 3) 00:14:03.650 18.773 - 18.868: 99.7561% ( 7) 00:14:03.650 18.868 - 18.963: 99.7782% ( 3) 00:14:03.650 18.963 - 19.058: 99.7856% ( 1) 00:14:03.650 19.058 - 19.153: 99.8004% ( 2) 00:14:03.650 19.153 - 19.247: 99.8152% ( 2) 00:14:03.650 19.247 - 19.342: 99.8226% ( 1) 00:14:03.650 19.342 - 19.437: 99.8300% ( 1) 00:14:03.650 19.437 - 19.532: 99.8522% ( 3) 00:14:03.650 19.532 - 19.627: 99.8596% ( 1) 00:14:03.650 20.290 - 20.385: 99.8669% ( 1) 00:14:03.650 21.902 - 21.997: 99.8743% ( 1) 00:14:03.650 22.376 - 22.471: 99.8817% ( 1) 00:14:03.650 26.548 - 26.738: 99.8891% ( 1) 00:14:03.650 27.496 - 27.686: 99.8965% ( 1) 00:14:03.650 27.686 - 27.876: 99.9039% ( 1) 00:14:03.650 29.203 - 29.393: 99.9113% ( 1) 00:14:03.650 3980.705 - 4004.978: 99.9852% ( 10) 00:14:03.650 4004.978 - 4029.250: 100.0000% ( 2) 00:14:03.650 00:14:03.650 Complete histogram 00:14:03.650 ================== 00:14:03.650 Range in us Cumulative Count 00:14:03.650 2.062 - 2.074: 1.7519% ( 237) 00:14:03.650 2.074 - 2.086: 34.4027% ( 4417) 00:14:03.650 2.086 - 2.098: 43.8350% ( 1276) 00:14:03.650 2.098 - 2.110: 46.9692% ( 424) 00:14:03.650 2.110 - 2.121: 58.2939% ( 1532) 00:14:03.650 2.121 - 2.133: 60.8072% ( 340) 00:14:03.650 2.133 - 2.145: 65.6342% ( 653) 00:14:03.650 2.145 - 2.157: 75.5840% ( 1346) 00:14:03.650 2.157 - 2.169: 77.3729% ( 242) 00:14:03.650 2.169 - 2.181: 79.2135% ( 249) 00:14:03.650 2.181 - 2.193: 82.1112% ( 392) 00:14:03.650 2.193 - 2.204: 82.8800% ( 104) 00:14:03.650 2.204 - 2.216: 84.2105% ( 180) 00:14:03.650 2.216 - 2.228: 88.7640% ( 616) 00:14:03.650 2.228 - 2.240: 91.0482% ( 309) 00:14:03.650 2.240 - 2.252: 91.9131% ( 117) 00:14:03.650 2.252 - 2.264: 93.3250% ( 191) 00:14:03.650 2.264 - 2.276: 93.7020% ( 51) 00:14:03.650 2.276 - 2.287: 94.0420% ( 46) 00:14:03.650 2.287 - 2.299: 94.6112% ( 77) 00:14:03.650 2.299 - 2.311: 95.2469% ( 86) 00:14:03.650 2.311 - 2.323: 95.4908% ( 33) 00:14:03.650 2.323 - 2.335: 95.6017% ( 15) 00:14:03.650 2.335 - 2.347: 95.6830% ( 11) 00:14:03.650 2.347 - 2.359: 95.7791% ( 13) 00:14:03.650 2.359 - 2.370: 96.0157% ( 32) 00:14:03.650 2.370 - 2.382: 96.2818% ( 36) 00:14:03.650 2.382 - 2.394: 96.6218% ( 46) 00:14:03.650 2.394 - 2.406: 96.8436% ( 30) 00:14:03.650 2.406 - 2.418: 97.0210% ( 24) 00:14:03.650 2.418 - 2.430: 97.3315% ( 42) 00:14:03.650 2.430 - 2.441: 97.4867% ( 21) 00:14:03.650 2.441 - 2.453: 97.6567% ( 23) 00:14:03.650 2.453 - 2.465: 97.7528% ( 13) 00:14:03.650 2.465 - 2.477: 97.8489% ( 13) 00:14:03.650 2.477 - 2.489: 97.9820% ( 18) 
00:14:03.650 2.489 - 2.501: 98.1076% ( 17) 00:14:03.650 2.501 - 2.513: 98.1889% ( 11) 00:14:03.650 2.513 - 2.524: 98.2555% ( 9) 00:14:03.650 2.524 - 2.536: 98.3220% ( 9) 00:14:03.650 2.536 - 2.548: 98.3442% ( 3) 00:14:03.650 2.548 - 2.560: 98.3811% ( 5) 00:14:03.650 2.560 - 2.572: 98.4033% ( 3) 00:14:03.650 2.572 - 2.584: 98.4107% ( 1) 00:14:03.650 2.584 - 2.596: 98.4255% ( 2) 00:14:03.650 2.596 - 2.607: 98.4329% ( 1) 00:14:03.650 2.607 - 2.619: 98.4477% ( 2) 00:14:03.650 2.619 - 2.631: 98.4551% ( 1) 00:14:03.650 2.631 - 2.643: 98.4624% ( 1) 00:14:03.650 2.702 - 2.714: 98.4698% ( 1) 00:14:03.650 2.726 - 2.738: 98.4772% ( 1) 00:14:03.650 2.761 - 2.773: 98.4846% ( 1) 00:14:03.650 2.856 - 2.868: 98.4920% ( 1) 00:14:03.650 2.904 - 2.916: 98.4994% ( 1) 00:14:03.650 2.963 - 2.975: 98.5068% ( 1) 00:14:03.650 3.010 - 3.022: 98.5142% ( 1) 00:14:03.650 3.319 - 3.342: 98.5216% ( 1) 00:14:03.651 3.390 - 3.413: 98.5290% ( 1) 00:14:03.651 3.413 - 3.437: 98.5364% ( 1) 00:14:03.651 3.437 - 3.461: 98.5512% ( 2) 00:14:03.651 3.461 - 3.484: 98.5585% ( 1) 00:14:03.651 3.484 - 3.508: 98.5733% ( 2) 00:14:03.651 3.508 - 3.532: 98.6177% ( 6) 00:14:03.651 3.532 - 3.556: 98.6399% ( 3) 00:14:03.651 3.556 - 3.579: 98.6473% ( 1) 00:14:03.651 3.603 - 3.627: 98.6546% ( 1) 00:14:03.651 3.721 - 3.745: 98.6620% ( 1) 00:14:03.651 3.745 - 3.769: 98.6694% ( 1) 00:14:03.651 3.769 - 3.793: 98.6768% ( 1) 00:14:03.908 [2024-07-22 12:09:11.543288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:03.908 3.793 - 3.816: 98.6842% ( 1) 00:14:03.908 3.816 - 3.840: 98.6990% ( 2) 00:14:03.908 3.840 - 3.864: 98.7064% ( 1) 00:14:03.908 3.864 - 3.887: 98.7138% ( 1) 00:14:03.908 3.911 - 3.935: 98.7212% ( 1) 00:14:03.908 3.959 - 3.982: 98.7286% ( 1) 00:14:03.908 3.982 - 4.006: 98.7360% ( 1) 00:14:03.908 4.030 - 4.053: 98.7433% ( 1) 00:14:03.908 4.053 - 4.077: 98.7507% ( 1) 00:14:03.908 4.172 - 4.196: 98.7581% ( 1) 00:14:03.908 4.409 - 4.433: 98.7655% ( 1) 00:14:03.908 5.381 - 5.404: 98.7729% ( 1) 00:14:03.908 5.428 - 5.452: 98.7803% ( 1) 00:14:03.908 5.476 - 5.499: 98.7877% ( 1) 00:14:03.908 5.689 - 5.713: 98.7951% ( 1) 00:14:03.908 5.713 - 5.736: 98.8173% ( 3) 00:14:03.908 5.760 - 5.784: 98.8247% ( 1) 00:14:03.908 5.879 - 5.902: 98.8321% ( 1) 00:14:03.908 6.068 - 6.116: 98.8394% ( 1) 00:14:03.908 6.116 - 6.163: 98.8468% ( 1) 00:14:03.908 6.305 - 6.353: 98.8542% ( 1) 00:14:03.908 6.353 - 6.400: 98.8690% ( 2) 00:14:03.908 6.400 - 6.447: 98.8764% ( 1) 00:14:03.908 6.447 - 6.495: 98.8838% ( 1) 00:14:03.908 6.590 - 6.637: 98.8912% ( 1) 00:14:03.908 6.827 - 6.874: 98.8986% ( 1) 00:14:03.908 7.206 - 7.253: 98.9060% ( 1) 00:14:03.908 8.676 - 8.723: 98.9134% ( 1) 00:14:03.908 9.576 - 9.624: 98.9208% ( 1) 00:14:03.908 10.951 - 10.999: 98.9281% ( 1) 00:14:03.908 15.360 - 15.455: 98.9429% ( 2) 00:14:03.908 15.644 - 15.739: 98.9503% ( 1) 00:14:03.908 15.739 - 15.834: 98.9725% ( 3) 00:14:03.908 15.834 - 15.929: 98.9947% ( 3) 00:14:03.908 15.929 - 16.024: 99.0095% ( 2) 00:14:03.908 16.024 - 16.119: 99.0538% ( 6) 00:14:03.908 16.119 - 16.213: 99.0908% ( 5) 00:14:03.908 16.213 - 16.308: 99.1499% ( 8) 00:14:03.908 16.308 - 16.403: 99.1721% ( 3) 00:14:03.908 16.403 - 16.498: 99.1869% ( 2) 00:14:03.908 16.498 - 16.593: 99.2534% ( 9) 00:14:03.908 16.593 - 16.687: 99.2978% ( 6) 00:14:03.908 16.687 - 16.782: 99.3199% ( 3) 00:14:03.908 16.782 - 16.877: 99.3273% ( 1) 00:14:03.908 16.877 - 16.972: 99.3421% ( 2) 00:14:03.908 16.972 - 17.067: 99.3791% ( 5) 00:14:03.908
17.067 - 17.161: 99.3938% ( 2) 00:14:03.908 17.161 - 17.256: 99.4012% ( 1) 00:14:03.908 17.256 - 17.351: 99.4086% ( 1) 00:14:03.908 17.351 - 17.446: 99.4160% ( 1) 00:14:03.908 17.541 - 17.636: 99.4234% ( 1) 00:14:03.908 17.636 - 17.730: 99.4308% ( 1) 00:14:03.908 17.920 - 18.015: 99.4382% ( 1) 00:14:03.908 18.015 - 18.110: 99.4456% ( 1) 00:14:03.908 18.110 - 18.204: 99.4530% ( 1) 00:14:03.908 18.394 - 18.489: 99.4604% ( 1) 00:14:03.908 2014.625 - 2026.761: 99.4678% ( 1) 00:14:03.908 3034.074 - 3046.210: 99.4752% ( 1) 00:14:03.908 3980.705 - 4004.978: 99.8817% ( 55) 00:14:03.908 4004.978 - 4029.250: 99.9778% ( 13) 00:14:03.908 4975.881 - 5000.154: 99.9926% ( 2) 00:14:03.908 5995.330 - 6019.603: 100.0000% ( 1) 00:14:03.908 00:14:03.908 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:03.908 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:03.908 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:03.908 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:03.908 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:04.165 [ 00:14:04.165 { 00:14:04.165 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:04.165 "subtype": "Discovery", 00:14:04.165 "listen_addresses": [], 00:14:04.165 "allow_any_host": true, 00:14:04.165 "hosts": [] 00:14:04.165 }, 00:14:04.165 { 00:14:04.165 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:04.165 "subtype": "NVMe", 00:14:04.165 "listen_addresses": [ 00:14:04.165 { 00:14:04.165 "trtype": "VFIOUSER", 00:14:04.165 "adrfam": "IPv4", 00:14:04.165 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:04.165 "trsvcid": "0" 00:14:04.165 } 00:14:04.165 ], 00:14:04.165 "allow_any_host": true, 00:14:04.165 "hosts": [], 00:14:04.165 "serial_number": "SPDK1", 00:14:04.165 "model_number": "SPDK bdev Controller", 00:14:04.165 "max_namespaces": 32, 00:14:04.165 "min_cntlid": 1, 00:14:04.165 "max_cntlid": 65519, 00:14:04.165 "namespaces": [ 00:14:04.165 { 00:14:04.165 "nsid": 1, 00:14:04.165 "bdev_name": "Malloc1", 00:14:04.165 "name": "Malloc1", 00:14:04.165 "nguid": "2AEC63D427584F9DB4D41F548960B092", 00:14:04.165 "uuid": "2aec63d4-2758-4f9d-b4d4-1f548960b092" 00:14:04.165 } 00:14:04.165 ] 00:14:04.165 }, 00:14:04.165 { 00:14:04.165 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:04.165 "subtype": "NVMe", 00:14:04.165 "listen_addresses": [ 00:14:04.165 { 00:14:04.165 "trtype": "VFIOUSER", 00:14:04.165 "adrfam": "IPv4", 00:14:04.165 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:04.165 "trsvcid": "0" 00:14:04.165 } 00:14:04.165 ], 00:14:04.165 "allow_any_host": true, 00:14:04.165 "hosts": [], 00:14:04.165 "serial_number": "SPDK2", 00:14:04.165 "model_number": "SPDK bdev Controller", 00:14:04.165 "max_namespaces": 32, 00:14:04.165 "min_cntlid": 1, 00:14:04.165 "max_cntlid": 65519, 00:14:04.165 "namespaces": [ 00:14:04.165 { 00:14:04.165 "nsid": 1, 00:14:04.165 "bdev_name": "Malloc2", 00:14:04.165 "name": "Malloc2", 00:14:04.165 "nguid": "04904C8662854011880DBCC25486F673", 00:14:04.165 "uuid": "04904c86-6285-4011-880d-bcc25486f673" 00:14:04.165 } 00:14:04.165 ] 00:14:04.165 } 00:14:04.165 ] 00:14:04.165 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=952920 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:04.166 12:09:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:04.166 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.166 [2024-07-22 12:09:12.034474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.423 Malloc3 00:14:04.423 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:04.680 [2024-07-22 12:09:12.410304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.680 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:04.680 Asynchronous Event Request test 00:14:04.680 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.680 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.680 Registering asynchronous event callbacks... 00:14:04.680 Starting namespace attribute notice tests for all controllers... 00:14:04.680 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:04.680 aer_cb - Changed Namespace 00:14:04.680 Cleaning up... 
00:14:04.938 [ 00:14:04.938 { 00:14:04.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:04.938 "subtype": "Discovery", 00:14:04.938 "listen_addresses": [], 00:14:04.938 "allow_any_host": true, 00:14:04.938 "hosts": [] 00:14:04.938 }, 00:14:04.938 { 00:14:04.938 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:04.938 "subtype": "NVMe", 00:14:04.938 "listen_addresses": [ 00:14:04.938 { 00:14:04.938 "trtype": "VFIOUSER", 00:14:04.938 "adrfam": "IPv4", 00:14:04.938 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:04.938 "trsvcid": "0" 00:14:04.938 } 00:14:04.938 ], 00:14:04.938 "allow_any_host": true, 00:14:04.938 "hosts": [], 00:14:04.938 "serial_number": "SPDK1", 00:14:04.938 "model_number": "SPDK bdev Controller", 00:14:04.938 "max_namespaces": 32, 00:14:04.938 "min_cntlid": 1, 00:14:04.938 "max_cntlid": 65519, 00:14:04.938 "namespaces": [ 00:14:04.938 { 00:14:04.938 "nsid": 1, 00:14:04.938 "bdev_name": "Malloc1", 00:14:04.938 "name": "Malloc1", 00:14:04.938 "nguid": "2AEC63D427584F9DB4D41F548960B092", 00:14:04.938 "uuid": "2aec63d4-2758-4f9d-b4d4-1f548960b092" 00:14:04.938 }, 00:14:04.938 { 00:14:04.938 "nsid": 2, 00:14:04.938 "bdev_name": "Malloc3", 00:14:04.938 "name": "Malloc3", 00:14:04.938 "nguid": "7BA9D3369E21455C95F0CC2F8F918381", 00:14:04.938 "uuid": "7ba9d336-9e21-455c-95f0-cc2f8f918381" 00:14:04.938 } 00:14:04.938 ] 00:14:04.938 }, 00:14:04.938 { 00:14:04.938 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:04.938 "subtype": "NVMe", 00:14:04.938 "listen_addresses": [ 00:14:04.938 { 00:14:04.938 "trtype": "VFIOUSER", 00:14:04.938 "adrfam": "IPv4", 00:14:04.938 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:04.938 "trsvcid": "0" 00:14:04.938 } 00:14:04.938 ], 00:14:04.938 "allow_any_host": true, 00:14:04.938 "hosts": [], 00:14:04.938 "serial_number": "SPDK2", 00:14:04.938 "model_number": "SPDK bdev Controller", 00:14:04.938 "max_namespaces": 32, 00:14:04.938 "min_cntlid": 1, 00:14:04.938 "max_cntlid": 65519, 00:14:04.938 "namespaces": [ 00:14:04.938 { 00:14:04.938 "nsid": 1, 00:14:04.938 "bdev_name": "Malloc2", 00:14:04.938 "name": "Malloc2", 00:14:04.938 "nguid": "04904C8662854011880DBCC25486F673", 00:14:04.938 "uuid": "04904c86-6285-4011-880d-bcc25486f673" 00:14:04.938 } 00:14:04.938 ] 00:14:04.938 } 00:14:04.938 ] 00:14:04.938 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 952920 00:14:04.938 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.938 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:04.938 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:04.938 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:04.939 [2024-07-22 12:09:12.690109] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:14:04.939 [2024-07-22 12:09:12.690152] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953047 ] 00:14:04.939 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.939 [2024-07-22 12:09:12.708211] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:04.939 [2024-07-22 12:09:12.724807] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:04.939 [2024-07-22 12:09:12.733921] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.939 [2024-07-22 12:09:12.733954] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3ca18b1000 00:14:04.939 [2024-07-22 12:09:12.734907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.735928] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.736932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.737943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.738950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.739957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.740969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.741982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.939 [2024-07-22 12:09:12.742988] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.939 [2024-07-22 12:09:12.743009] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3ca0673000 00:14:04.939 [2024-07-22 12:09:12.744148] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.939 [2024-07-22 12:09:12.758929] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:04.939 [2024-07-22 12:09:12.758979] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:04.939 [2024-07-22 12:09:12.761061] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:04.939 [2024-07-22 12:09:12.761116] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: 
max_completions_cap = 64 num_trackers = 192 00:14:04.939 [2024-07-22 12:09:12.761203] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:04.939 [2024-07-22 12:09:12.761227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:04.939 [2024-07-22 12:09:12.761237] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:04.939 [2024-07-22 12:09:12.762068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:04.939 [2024-07-22 12:09:12.762093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:04.939 [2024-07-22 12:09:12.762106] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:04.939 [2024-07-22 12:09:12.763072] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:04.939 [2024-07-22 12:09:12.763095] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:04.939 [2024-07-22 12:09:12.763109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:04.939 [2024-07-22 12:09:12.764083] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:04.939 [2024-07-22 12:09:12.764103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:04.939 [2024-07-22 12:09:12.765088] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:04.939 [2024-07-22 12:09:12.765108] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:04.939 [2024-07-22 12:09:12.765117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:04.939 [2024-07-22 12:09:12.765128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:04.939 [2024-07-22 12:09:12.765242] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:04.939 [2024-07-22 12:09:12.765251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:04.939 [2024-07-22 12:09:12.765259] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:04.939 [2024-07-22 12:09:12.766099] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:04.939 [2024-07-22 12:09:12.769625] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:04.939 [2024-07-22 12:09:12.770131] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:04.939 [2024-07-22 12:09:12.771122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.939 [2024-07-22 12:09:12.771207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:04.939 [2024-07-22 12:09:12.772145] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:04.939 [2024-07-22 12:09:12.772165] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:04.939 [2024-07-22 12:09:12.772174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.772197] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:04.939 [2024-07-22 12:09:12.772209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.772232] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.939 [2024-07-22 12:09:12.772241] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.939 [2024-07-22 12:09:12.772248] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.939 [2024-07-22 12:09:12.772268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.780631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.780660] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:04.939 [2024-07-22 12:09:12.780670] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:04.939 [2024-07-22 12:09:12.780677] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:04.939 [2024-07-22 12:09:12.780685] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:04.939 [2024-07-22 12:09:12.780693] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:04.939 [2024-07-22 12:09:12.780700] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:04.939 [2024-07-22 12:09:12.780708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.780722] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.780741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.788626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.788657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.939 [2024-07-22 12:09:12.788672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.939 [2024-07-22 12:09:12.788684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.939 [2024-07-22 12:09:12.788696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.939 [2024-07-22 12:09:12.788705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.788720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.788734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.796625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.796644] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:04.939 [2024-07-22 12:09:12.796653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.796664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.796675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.796689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.804622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.804698] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.804715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.804727] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002f9000 len:4096 00:14:04.939 [2024-07-22 12:09:12.804736] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:04.939 [2024-07-22 12:09:12.804742] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.939 [2024-07-22 12:09:12.804752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.812642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.812666] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:04.939 [2024-07-22 12:09:12.812686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.812706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.812720] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.939 [2024-07-22 12:09:12.812728] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.939 [2024-07-22 12:09:12.812735] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.939 [2024-07-22 12:09:12.812745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.820622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.820650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.820666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.820679] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.939 [2024-07-22 12:09:12.820688] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.939 [2024-07-22 12:09:12.820694] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.939 [2024-07-22 12:09:12.820703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.828625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.828646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.828658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.828674] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.828685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.828694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.828703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.828711] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:04.939 [2024-07-22 12:09:12.828719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:04.939 [2024-07-22 12:09:12.828727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:04.939 [2024-07-22 12:09:12.828754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.836626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.836653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.844626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.844652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.852626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.852651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.939 [2024-07-22 12:09:12.860624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:04.939 [2024-07-22 12:09:12.860655] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:04.939 [2024-07-22 12:09:12.860666] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:04.940 [2024-07-22 12:09:12.860672] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:04.940 [2024-07-22 12:09:12.860678] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:04.940 [2024-07-22 12:09:12.860684] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:04.940 [2024-07-22 12:09:12.860694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:04.940 [2024-07-22 12:09:12.860705] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:04.940 [2024-07-22 12:09:12.860713] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:04.940 [2024-07-22 12:09:12.860719] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.940 [2024-07-22 12:09:12.860728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:04.940 [2024-07-22 12:09:12.860739] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:04.940 [2024-07-22 12:09:12.860747] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.940 [2024-07-22 12:09:12.860753] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.940 [2024-07-22 12:09:12.860762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.940 [2024-07-22 12:09:12.860773] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:04.940 [2024-07-22 12:09:12.860781] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:04.940 [2024-07-22 12:09:12.860787] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.940 [2024-07-22 12:09:12.860796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:05.197 [2024-07-22 12:09:12.868627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:05.197 [2024-07-22 12:09:12.868668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:05.197 [2024-07-22 12:09:12.868686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:05.197 [2024-07-22 12:09:12.868698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:05.197 ===================================================== 00:14:05.197 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.197 ===================================================== 00:14:05.197 Controller Capabilities/Features 00:14:05.197 ================================ 00:14:05.197 Vendor ID: 4e58 00:14:05.197 Subsystem Vendor ID: 4e58 00:14:05.197 Serial Number: SPDK2 00:14:05.197 Model Number: SPDK bdev Controller 00:14:05.197 Firmware Version: 24.09 00:14:05.197 Recommended Arb Burst: 6 00:14:05.197 IEEE OUI Identifier: 8d 6b 50 00:14:05.197 Multi-path I/O 00:14:05.197 May have multiple subsystem ports: Yes 00:14:05.197 May have multiple controllers: Yes 00:14:05.197 Associated with SR-IOV VF: No 00:14:05.197 Max Data Transfer Size: 131072 00:14:05.197 Max Number of Namespaces: 32 00:14:05.197 Max Number of I/O Queues: 127 00:14:05.197 NVMe Specification Version (VS): 1.3 00:14:05.197 NVMe Specification Version (Identify): 1.3 00:14:05.197 Maximum Queue Entries: 256 00:14:05.197 Contiguous Queues Required: Yes 00:14:05.197 
Arbitration Mechanisms Supported 00:14:05.197 Weighted Round Robin: Not Supported 00:14:05.197 Vendor Specific: Not Supported 00:14:05.197 Reset Timeout: 15000 ms 00:14:05.197 Doorbell Stride: 4 bytes 00:14:05.197 NVM Subsystem Reset: Not Supported 00:14:05.197 Command Sets Supported 00:14:05.197 NVM Command Set: Supported 00:14:05.197 Boot Partition: Not Supported 00:14:05.197 Memory Page Size Minimum: 4096 bytes 00:14:05.197 Memory Page Size Maximum: 4096 bytes 00:14:05.197 Persistent Memory Region: Not Supported 00:14:05.197 Optional Asynchronous Events Supported 00:14:05.197 Namespace Attribute Notices: Supported 00:14:05.197 Firmware Activation Notices: Not Supported 00:14:05.197 ANA Change Notices: Not Supported 00:14:05.197 PLE Aggregate Log Change Notices: Not Supported 00:14:05.197 LBA Status Info Alert Notices: Not Supported 00:14:05.197 EGE Aggregate Log Change Notices: Not Supported 00:14:05.198 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.198 Zone Descriptor Change Notices: Not Supported 00:14:05.198 Discovery Log Change Notices: Not Supported 00:14:05.198 Controller Attributes 00:14:05.198 128-bit Host Identifier: Supported 00:14:05.198 Non-Operational Permissive Mode: Not Supported 00:14:05.198 NVM Sets: Not Supported 00:14:05.198 Read Recovery Levels: Not Supported 00:14:05.198 Endurance Groups: Not Supported 00:14:05.198 Predictable Latency Mode: Not Supported 00:14:05.198 Traffic Based Keep ALive: Not Supported 00:14:05.198 Namespace Granularity: Not Supported 00:14:05.198 SQ Associations: Not Supported 00:14:05.198 UUID List: Not Supported 00:14:05.198 Multi-Domain Subsystem: Not Supported 00:14:05.198 Fixed Capacity Management: Not Supported 00:14:05.198 Variable Capacity Management: Not Supported 00:14:05.198 Delete Endurance Group: Not Supported 00:14:05.198 Delete NVM Set: Not Supported 00:14:05.198 Extended LBA Formats Supported: Not Supported 00:14:05.198 Flexible Data Placement Supported: Not Supported 00:14:05.198 00:14:05.198 Controller Memory Buffer Support 00:14:05.198 ================================ 00:14:05.198 Supported: No 00:14:05.198 00:14:05.198 Persistent Memory Region Support 00:14:05.198 ================================ 00:14:05.198 Supported: No 00:14:05.198 00:14:05.198 Admin Command Set Attributes 00:14:05.198 ============================ 00:14:05.198 Security Send/Receive: Not Supported 00:14:05.198 Format NVM: Not Supported 00:14:05.198 Firmware Activate/Download: Not Supported 00:14:05.198 Namespace Management: Not Supported 00:14:05.198 Device Self-Test: Not Supported 00:14:05.198 Directives: Not Supported 00:14:05.198 NVMe-MI: Not Supported 00:14:05.198 Virtualization Management: Not Supported 00:14:05.198 Doorbell Buffer Config: Not Supported 00:14:05.198 Get LBA Status Capability: Not Supported 00:14:05.198 Command & Feature Lockdown Capability: Not Supported 00:14:05.198 Abort Command Limit: 4 00:14:05.198 Async Event Request Limit: 4 00:14:05.198 Number of Firmware Slots: N/A 00:14:05.198 Firmware Slot 1 Read-Only: N/A 00:14:05.198 Firmware Activation Without Reset: N/A 00:14:05.198 Multiple Update Detection Support: N/A 00:14:05.198 Firmware Update Granularity: No Information Provided 00:14:05.198 Per-Namespace SMART Log: No 00:14:05.198 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.198 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:05.198 Command Effects Log Page: Supported 00:14:05.198 Get Log Page Extended Data: Supported 00:14:05.198 Telemetry Log Pages: Not Supported 00:14:05.198 Persistent Event Log 
Pages: Not Supported 00:14:05.198 Supported Log Pages Log Page: May Support 00:14:05.198 Commands Supported & Effects Log Page: Not Supported 00:14:05.198 Feature Identifiers & Effects Log Page:May Support 00:14:05.198 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.198 Data Area 4 for Telemetry Log: Not Supported 00:14:05.198 Error Log Page Entries Supported: 128 00:14:05.198 Keep Alive: Supported 00:14:05.198 Keep Alive Granularity: 10000 ms 00:14:05.198 00:14:05.198 NVM Command Set Attributes 00:14:05.198 ========================== 00:14:05.198 Submission Queue Entry Size 00:14:05.198 Max: 64 00:14:05.198 Min: 64 00:14:05.198 Completion Queue Entry Size 00:14:05.198 Max: 16 00:14:05.198 Min: 16 00:14:05.198 Number of Namespaces: 32 00:14:05.198 Compare Command: Supported 00:14:05.198 Write Uncorrectable Command: Not Supported 00:14:05.198 Dataset Management Command: Supported 00:14:05.198 Write Zeroes Command: Supported 00:14:05.198 Set Features Save Field: Not Supported 00:14:05.198 Reservations: Not Supported 00:14:05.198 Timestamp: Not Supported 00:14:05.198 Copy: Supported 00:14:05.198 Volatile Write Cache: Present 00:14:05.198 Atomic Write Unit (Normal): 1 00:14:05.198 Atomic Write Unit (PFail): 1 00:14:05.198 Atomic Compare & Write Unit: 1 00:14:05.198 Fused Compare & Write: Supported 00:14:05.198 Scatter-Gather List 00:14:05.198 SGL Command Set: Supported (Dword aligned) 00:14:05.198 SGL Keyed: Not Supported 00:14:05.198 SGL Bit Bucket Descriptor: Not Supported 00:14:05.198 SGL Metadata Pointer: Not Supported 00:14:05.198 Oversized SGL: Not Supported 00:14:05.198 SGL Metadata Address: Not Supported 00:14:05.198 SGL Offset: Not Supported 00:14:05.198 Transport SGL Data Block: Not Supported 00:14:05.198 Replay Protected Memory Block: Not Supported 00:14:05.198 00:14:05.198 Firmware Slot Information 00:14:05.198 ========================= 00:14:05.198 Active slot: 1 00:14:05.198 Slot 1 Firmware Revision: 24.09 00:14:05.198 00:14:05.198 00:14:05.198 Commands Supported and Effects 00:14:05.198 ============================== 00:14:05.198 Admin Commands 00:14:05.198 -------------- 00:14:05.198 Get Log Page (02h): Supported 00:14:05.198 Identify (06h): Supported 00:14:05.198 Abort (08h): Supported 00:14:05.198 Set Features (09h): Supported 00:14:05.198 Get Features (0Ah): Supported 00:14:05.198 Asynchronous Event Request (0Ch): Supported 00:14:05.198 Keep Alive (18h): Supported 00:14:05.198 I/O Commands 00:14:05.198 ------------ 00:14:05.198 Flush (00h): Supported LBA-Change 00:14:05.198 Write (01h): Supported LBA-Change 00:14:05.198 Read (02h): Supported 00:14:05.198 Compare (05h): Supported 00:14:05.198 Write Zeroes (08h): Supported LBA-Change 00:14:05.198 Dataset Management (09h): Supported LBA-Change 00:14:05.198 Copy (19h): Supported LBA-Change 00:14:05.198 00:14:05.198 Error Log 00:14:05.198 ========= 00:14:05.198 00:14:05.198 Arbitration 00:14:05.198 =========== 00:14:05.198 Arbitration Burst: 1 00:14:05.198 00:14:05.198 Power Management 00:14:05.198 ================ 00:14:05.198 Number of Power States: 1 00:14:05.198 Current Power State: Power State #0 00:14:05.198 Power State #0: 00:14:05.198 Max Power: 0.00 W 00:14:05.198 Non-Operational State: Operational 00:14:05.198 Entry Latency: Not Reported 00:14:05.198 Exit Latency: Not Reported 00:14:05.198 Relative Read Throughput: 0 00:14:05.198 Relative Read Latency: 0 00:14:05.198 Relative Write Throughput: 0 00:14:05.198 Relative Write Latency: 0 00:14:05.198 Idle Power: Not Reported 00:14:05.198 Active Power: Not Reported 
00:14:05.198 Non-Operational Permissive Mode: Not Supported 00:14:05.198 00:14:05.198 Health Information 00:14:05.198 ================== 00:14:05.198 Critical Warnings: 00:14:05.198 Available Spare Space: OK 00:14:05.198 Temperature: OK 00:14:05.198 Device Reliability: OK 00:14:05.198 Read Only: No 00:14:05.198 Volatile Memory Backup: OK 00:14:05.198 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:05.198 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:05.198 Available Spare: 0% 00:14:05.198 Available Spare Threshold: 0% [2024-07-22 12:09:12.868812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:05.198 [2024-07-22 12:09:12.876625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:05.198 [2024-07-22 12:09:12.876679] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:05.198 [2024-07-22 12:09:12.876697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.198 [2024-07-22 12:09:12.876707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.198 [2024-07-22 12:09:12.876717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.198 [2024-07-22 12:09:12.876727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.198 [2024-07-22 12:09:12.876792] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:05.198 [2024-07-22 12:09:12.876813] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:05.198 [2024-07-22 12:09:12.877792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.198 [2024-07-22 12:09:12.877879] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:05.198 [2024-07-22 12:09:12.877895] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:05.198 [2024-07-22 12:09:12.878804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:05.198 [2024-07-22 12:09:12.878828] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:05.198 [2024-07-22 12:09:12.878880] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:05.198 [2024-07-22 12:09:12.880069] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:05.198 Life Percentage Used: 0% 00:14:05.198 Data Units Read: 0 00:14:05.198 Data Units Written: 0 00:14:05.198 Host Read Commands: 0 00:14:05.198 Host Write Commands: 0 00:14:05.198 Controller Busy Time: 0 minutes 00:14:05.198 Power Cycles: 0 00:14:05.198 Power On Hours: 0 hours 00:14:05.198 Unsafe Shutdowns: 0 00:14:05.198 Unrecoverable
Media Errors: 0 00:14:05.198 Lifetime Error Log Entries: 0 00:14:05.198 Warning Temperature Time: 0 minutes 00:14:05.198 Critical Temperature Time: 0 minutes 00:14:05.198 00:14:05.198 Number of Queues 00:14:05.198 ================ 00:14:05.199 Number of I/O Submission Queues: 127 00:14:05.199 Number of I/O Completion Queues: 127 00:14:05.199 00:14:05.199 Active Namespaces 00:14:05.199 ================= 00:14:05.199 Namespace ID:1 00:14:05.199 Error Recovery Timeout: Unlimited 00:14:05.199 Command Set Identifier: NVM (00h) 00:14:05.199 Deallocate: Supported 00:14:05.199 Deallocated/Unwritten Error: Not Supported 00:14:05.199 Deallocated Read Value: Unknown 00:14:05.199 Deallocate in Write Zeroes: Not Supported 00:14:05.199 Deallocated Guard Field: 0xFFFF 00:14:05.199 Flush: Supported 00:14:05.199 Reservation: Supported 00:14:05.199 Namespace Sharing Capabilities: Multiple Controllers 00:14:05.199 Size (in LBAs): 131072 (0GiB) 00:14:05.199 Capacity (in LBAs): 131072 (0GiB) 00:14:05.199 Utilization (in LBAs): 131072 (0GiB) 00:14:05.199 NGUID: 04904C8662854011880DBCC25486F673 00:14:05.199 UUID: 04904c86-6285-4011-880d-bcc25486f673 00:14:05.199 Thin Provisioning: Not Supported 00:14:05.199 Per-NS Atomic Units: Yes 00:14:05.199 Atomic Boundary Size (Normal): 0 00:14:05.199 Atomic Boundary Size (PFail): 0 00:14:05.199 Atomic Boundary Offset: 0 00:14:05.199 Maximum Single Source Range Length: 65535 00:14:05.199 Maximum Copy Length: 65535 00:14:05.199 Maximum Source Range Count: 1 00:14:05.199 NGUID/EUI64 Never Reused: No 00:14:05.199 Namespace Write Protected: No 00:14:05.199 Number of LBA Formats: 1 00:14:05.199 Current LBA Format: LBA Format #00 00:14:05.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:05.199 00:14:05.199 12:09:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:05.199 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.199 [2024-07-22 12:09:13.105461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.461 Initializing NVMe Controllers 00:14:10.461 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:10.461 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:10.461 Initialization complete. Launching workers. 
00:14:10.461 ======================================================== 00:14:10.461 Latency(us) 00:14:10.461 Device Information : IOPS MiB/s Average min max 00:14:10.461 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35342.12 138.06 3620.83 1150.40 7568.44 00:14:10.461 ======================================================== 00:14:10.461 Total : 35342.12 138.06 3620.83 1150.40 7568.44 00:14:10.461 00:14:10.461 [2024-07-22 12:09:18.210016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.461 12:09:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:10.461 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.749 [2024-07-22 12:09:18.453719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:16.045 Initializing NVMe Controllers 00:14:16.045 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:16.045 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:16.045 Initialization complete. Launching workers. 00:14:16.045 ======================================================== 00:14:16.045 Latency(us) 00:14:16.045 Device Information : IOPS MiB/s Average min max 00:14:16.045 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32419.59 126.64 3949.45 1197.14 11576.86 00:14:16.045 ======================================================== 00:14:16.045 Total : 32419.59 126.64 3949.45 1197.14 11576.86 00:14:16.045 00:14:16.045 [2024-07-22 12:09:23.480720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:16.045 12:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:16.045 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.045 [2024-07-22 12:09:23.688416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:21.302 [2024-07-22 12:09:28.825782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:21.302 Initializing NVMe Controllers 00:14:21.302 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:21.302 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:21.302 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:21.302 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:21.302 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:21.302 Initialization complete. Launching workers. 
00:14:21.302 Starting thread on core 2 00:14:21.302 Starting thread on core 3 00:14:21.302 Starting thread on core 1 00:14:21.302 12:09:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:21.303 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.303 [2024-07-22 12:09:29.134403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.581 [2024-07-22 12:09:32.218359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.581 Initializing NVMe Controllers 00:14:24.581 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.581 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:24.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:24.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:24.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:24.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:24.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:24.581 Initialization complete. Launching workers. 00:14:24.581 Starting thread on core 1 with urgent priority queue 00:14:24.582 Starting thread on core 2 with urgent priority queue 00:14:24.582 Starting thread on core 3 with urgent priority queue 00:14:24.582 Starting thread on core 0 with urgent priority queue 00:14:24.582 SPDK bdev Controller (SPDK2 ) core 0: 5158.33 IO/s 19.39 secs/100000 ios 00:14:24.582 SPDK bdev Controller (SPDK2 ) core 1: 5168.67 IO/s 19.35 secs/100000 ios 00:14:24.582 SPDK bdev Controller (SPDK2 ) core 2: 5272.33 IO/s 18.97 secs/100000 ios 00:14:24.582 SPDK bdev Controller (SPDK2 ) core 3: 5368.67 IO/s 18.63 secs/100000 ios 00:14:24.582 ======================================================== 00:14:24.582 00:14:24.582 12:09:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:24.582 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.839 [2024-07-22 12:09:32.529173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.839 Initializing NVMe Controllers 00:14:24.839 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.839 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.839 Namespace ID: 1 size: 0GB 00:14:24.839 Initialization complete. 00:14:24.839 INFO: using host memory buffer for IO 00:14:24.839 Hello world! 
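(For reference, a minimal sketch of reproducing the hello_world run above by hand; the SPDK checkout location is an assumption, while the flags, socket path, and subsystem NQN are taken from the invocation in this log:)

  # Sketch only: assumes an SPDK build tree at $SPDK_DIR and a running
  # nvmf_tgt already exporting a VFIOUSER listener on the socket dir below.
  SPDK_DIR=/path/to/spdk                        # assumption: local checkout
  SOCK=/var/run/vfio-user/domain/vfio-user2/2   # socket dir used by this test
  "$SPDK_DIR"/build/examples/hello_world -d 256 -g \
      -r "trtype:VFIOUSER traddr:$SOCK subnqn:nqn.2019-07.io.spdk:cnode2"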
00:14:24.839 [2024-07-22 12:09:32.538355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.839 12:09:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:24.839 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.095 [2024-07-22 12:09:32.835197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.025 Initializing NVMe Controllers 00:14:26.025 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.025 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.025 Initialization complete. Launching workers. 00:14:26.025 submit (in ns) avg, min, max = 6867.8, 3503.3, 4017224.4 00:14:26.025 complete (in ns) avg, min, max = 26082.5, 2067.8, 4018737.8 00:14:26.025 00:14:26.025 Submit histogram 00:14:26.025 ================ 00:14:26.025 Range in us Cumulative Count 00:14:26.025 3.484 - 3.508: 0.0074% ( 1) 00:14:26.025 3.508 - 3.532: 0.8827% ( 119) 00:14:26.025 3.532 - 3.556: 2.0597% ( 160) 00:14:26.025 3.556 - 3.579: 6.1571% ( 557) 00:14:26.025 3.579 - 3.603: 12.5497% ( 869) 00:14:26.025 3.603 - 3.627: 22.7233% ( 1383) 00:14:26.025 3.627 - 3.650: 32.5070% ( 1330) 00:14:26.025 3.650 - 3.674: 41.8420% ( 1269) 00:14:26.025 3.674 - 3.698: 48.6832% ( 930) 00:14:26.025 3.698 - 3.721: 54.1710% ( 746) 00:14:26.025 3.721 - 3.745: 57.9300% ( 511) 00:14:26.025 3.745 - 3.769: 61.6963% ( 512) 00:14:26.025 3.769 - 3.793: 65.1464% ( 469) 00:14:26.025 3.793 - 3.816: 68.4861% ( 454) 00:14:26.025 3.816 - 3.840: 72.2083% ( 506) 00:14:26.025 3.840 - 3.864: 76.1586% ( 537) 00:14:26.025 3.864 - 3.887: 80.2266% ( 553) 00:14:26.025 3.887 - 3.911: 83.5001% ( 445) 00:14:26.025 3.911 - 3.935: 86.1557% ( 361) 00:14:26.025 3.935 - 3.959: 87.9064% ( 238) 00:14:26.025 3.959 - 3.982: 89.3188% ( 192) 00:14:26.025 3.982 - 4.006: 90.6944% ( 187) 00:14:26.025 4.006 - 4.030: 91.7316% ( 141) 00:14:26.025 4.030 - 4.053: 92.5482% ( 111) 00:14:26.025 4.053 - 4.077: 93.5339% ( 134) 00:14:26.025 4.077 - 4.101: 94.2695% ( 100) 00:14:26.025 4.101 - 4.124: 94.9610% ( 94) 00:14:26.025 4.124 - 4.148: 95.5936% ( 86) 00:14:26.025 4.148 - 4.172: 96.0644% ( 64) 00:14:26.025 4.172 - 4.196: 96.2851% ( 30) 00:14:26.025 4.196 - 4.219: 96.5426% ( 35) 00:14:26.025 4.219 - 4.243: 96.7265% ( 25) 00:14:26.025 4.243 - 4.267: 96.8663% ( 19) 00:14:26.025 4.267 - 4.290: 96.9766% ( 15) 00:14:26.025 4.290 - 4.314: 97.0502% ( 10) 00:14:26.025 4.314 - 4.338: 97.1679% ( 16) 00:14:26.025 4.338 - 4.361: 97.2782% ( 15) 00:14:26.025 4.361 - 4.385: 97.3591% ( 11) 00:14:26.025 4.385 - 4.409: 97.4033% ( 6) 00:14:26.025 4.409 - 4.433: 97.4400% ( 5) 00:14:26.025 4.433 - 4.456: 97.4548% ( 2) 00:14:26.025 4.456 - 4.480: 97.4842% ( 4) 00:14:26.025 4.480 - 4.504: 97.5136% ( 4) 00:14:26.025 4.504 - 4.527: 97.5430% ( 4) 00:14:26.025 4.527 - 4.551: 97.5577% ( 2) 00:14:26.025 4.575 - 4.599: 97.5651% ( 1) 00:14:26.025 4.670 - 4.693: 97.5725% ( 1) 00:14:26.025 4.717 - 4.741: 97.5945% ( 3) 00:14:26.025 4.741 - 4.764: 97.6092% ( 2) 00:14:26.025 4.764 - 4.788: 97.6240% ( 2) 00:14:26.025 4.788 - 4.812: 97.6387% ( 2) 00:14:26.025 4.812 - 4.836: 97.6681% ( 4) 00:14:26.025 4.836 - 4.859: 97.7122% ( 6) 00:14:26.025 4.859 - 4.883: 97.7417% ( 4) 00:14:26.025 4.883 - 4.907: 97.7784% ( 5) 00:14:26.025 4.907 - 4.930: 97.8226% ( 6) 00:14:26.025 
4.930 - 4.954: 97.8888% ( 9) 00:14:26.025 4.954 - 4.978: 97.9182% ( 4) 00:14:26.025 4.978 - 5.001: 97.9770% ( 8) 00:14:26.025 5.001 - 5.025: 98.0359% ( 8) 00:14:26.025 5.025 - 5.049: 98.0653% ( 4) 00:14:26.025 5.049 - 5.073: 98.1095% ( 6) 00:14:26.025 5.073 - 5.096: 98.1536% ( 6) 00:14:26.025 5.096 - 5.120: 98.1757% ( 3) 00:14:26.025 5.120 - 5.144: 98.1904% ( 2) 00:14:26.025 5.144 - 5.167: 98.2272% ( 5) 00:14:26.025 5.167 - 5.191: 98.2419% ( 2) 00:14:26.025 5.215 - 5.239: 98.2492% ( 1) 00:14:26.025 5.239 - 5.262: 98.2566% ( 1) 00:14:26.025 5.262 - 5.286: 98.2713% ( 2) 00:14:26.025 5.310 - 5.333: 98.2934% ( 3) 00:14:26.025 5.333 - 5.357: 98.3081% ( 2) 00:14:26.025 5.357 - 5.381: 98.3228% ( 2) 00:14:26.025 5.381 - 5.404: 98.3301% ( 1) 00:14:26.025 5.428 - 5.452: 98.3375% ( 1) 00:14:26.025 5.452 - 5.476: 98.3449% ( 1) 00:14:26.025 5.476 - 5.499: 98.3522% ( 1) 00:14:26.025 5.499 - 5.523: 98.3596% ( 1) 00:14:26.025 5.547 - 5.570: 98.3669% ( 1) 00:14:26.025 5.570 - 5.594: 98.3743% ( 1) 00:14:26.025 5.689 - 5.713: 98.3816% ( 1) 00:14:26.025 5.926 - 5.950: 98.3890% ( 1) 00:14:26.025 6.116 - 6.163: 98.4037% ( 2) 00:14:26.025 6.305 - 6.353: 98.4184% ( 2) 00:14:26.025 6.353 - 6.400: 98.4258% ( 1) 00:14:26.025 6.400 - 6.447: 98.4405% ( 2) 00:14:26.025 6.495 - 6.542: 98.4478% ( 1) 00:14:26.025 6.590 - 6.637: 98.4552% ( 1) 00:14:26.025 6.684 - 6.732: 98.4626% ( 1) 00:14:26.025 6.732 - 6.779: 98.4699% ( 1) 00:14:26.025 6.827 - 6.874: 98.4773% ( 1) 00:14:26.025 6.874 - 6.921: 98.4920% ( 2) 00:14:26.025 6.921 - 6.969: 98.5067% ( 2) 00:14:26.025 6.969 - 7.016: 98.5141% ( 1) 00:14:26.025 7.016 - 7.064: 98.5214% ( 1) 00:14:26.025 7.064 - 7.111: 98.5435% ( 3) 00:14:26.025 7.111 - 7.159: 98.5508% ( 1) 00:14:26.025 7.159 - 7.206: 98.5655% ( 2) 00:14:26.025 7.206 - 7.253: 98.5729% ( 1) 00:14:26.025 7.253 - 7.301: 98.5876% ( 2) 00:14:26.025 7.301 - 7.348: 98.5950% ( 1) 00:14:26.025 7.348 - 7.396: 98.6023% ( 1) 00:14:26.025 7.396 - 7.443: 98.6097% ( 1) 00:14:26.025 7.443 - 7.490: 98.6244% ( 2) 00:14:26.025 7.490 - 7.538: 98.6391% ( 2) 00:14:26.025 7.585 - 7.633: 98.6538% ( 2) 00:14:26.025 7.633 - 7.680: 98.6612% ( 1) 00:14:26.025 7.680 - 7.727: 98.6685% ( 1) 00:14:26.025 7.727 - 7.775: 98.6759% ( 1) 00:14:26.025 7.775 - 7.822: 98.6832% ( 1) 00:14:26.025 7.822 - 7.870: 98.7200% ( 5) 00:14:26.025 7.870 - 7.917: 98.7421% ( 3) 00:14:26.025 7.964 - 8.012: 98.7568% ( 2) 00:14:26.025 8.107 - 8.154: 98.7715% ( 2) 00:14:26.025 8.154 - 8.201: 98.7789% ( 1) 00:14:26.025 8.296 - 8.344: 98.7862% ( 1) 00:14:26.025 8.391 - 8.439: 98.7936% ( 1) 00:14:26.025 8.486 - 8.533: 98.8009% ( 1) 00:14:26.025 8.676 - 8.723: 98.8157% ( 2) 00:14:26.025 8.770 - 8.818: 98.8230% ( 1) 00:14:26.025 8.818 - 8.865: 98.8304% ( 1) 00:14:26.025 8.913 - 8.960: 98.8377% ( 1) 00:14:26.025 9.197 - 9.244: 98.8598% ( 3) 00:14:26.025 9.719 - 9.766: 98.8671% ( 1) 00:14:26.025 9.766 - 9.813: 98.8745% ( 1) 00:14:26.025 10.145 - 10.193: 98.8819% ( 1) 00:14:26.025 10.619 - 10.667: 98.8892% ( 1) 00:14:26.025 10.809 - 10.856: 98.8966% ( 1) 00:14:26.025 10.856 - 10.904: 98.9039% ( 1) 00:14:26.025 11.188 - 11.236: 98.9186% ( 2) 00:14:26.025 11.710 - 11.757: 98.9260% ( 1) 00:14:26.025 11.804 - 11.852: 98.9334% ( 1) 00:14:26.025 13.179 - 13.274: 98.9407% ( 1) 00:14:26.025 13.369 - 13.464: 98.9481% ( 1) 00:14:26.025 13.653 - 13.748: 98.9554% ( 1) 00:14:26.025 13.938 - 14.033: 98.9701% ( 2) 00:14:26.026 15.360 - 15.455: 98.9775% ( 1) 00:14:26.026 17.067 - 17.161: 98.9922% ( 2) 00:14:26.026 17.351 - 17.446: 98.9996% ( 1) 00:14:26.026 17.446 - 17.541: 99.0290% ( 4) 
00:14:26.026 17.541 - 17.636: 99.0511% ( 3) 00:14:26.026 17.636 - 17.730: 99.0878% ( 5) 00:14:26.026 17.730 - 17.825: 99.1320% ( 6) 00:14:26.026 17.825 - 17.920: 99.1761% ( 6) 00:14:26.026 17.920 - 18.015: 99.2276% ( 7) 00:14:26.026 18.015 - 18.110: 99.2791% ( 7) 00:14:26.026 18.110 - 18.204: 99.3674% ( 12) 00:14:26.026 18.204 - 18.299: 99.4630% ( 13) 00:14:26.026 18.299 - 18.394: 99.5586% ( 13) 00:14:26.026 18.394 - 18.489: 99.6101% ( 7) 00:14:26.026 18.489 - 18.584: 99.6837% ( 10) 00:14:26.026 18.584 - 18.679: 99.7425% ( 8) 00:14:26.026 18.679 - 18.773: 99.7646% ( 3) 00:14:26.026 18.773 - 18.868: 99.7793% ( 2) 00:14:26.026 18.868 - 18.963: 99.8087% ( 4) 00:14:26.026 18.963 - 19.058: 99.8308% ( 3) 00:14:26.026 19.058 - 19.153: 99.8529% ( 3) 00:14:26.026 19.247 - 19.342: 99.8602% ( 1) 00:14:26.026 19.816 - 19.911: 99.8676% ( 1) 00:14:26.026 19.911 - 20.006: 99.8823% ( 2) 00:14:26.026 20.290 - 20.385: 99.8897% ( 1) 00:14:26.026 20.575 - 20.670: 99.8970% ( 1) 00:14:26.026 21.144 - 21.239: 99.9044% ( 1) 00:14:26.026 21.239 - 21.333: 99.9117% ( 1) 00:14:26.026 28.065 - 28.255: 99.9191% ( 1) 00:14:26.026 28.634 - 28.824: 99.9264% ( 1) 00:14:26.026 3980.705 - 4004.978: 99.9853% ( 8) 00:14:26.026 4004.978 - 4029.250: 100.0000% ( 2) 00:14:26.026 00:14:26.026 Complete histogram 00:14:26.026 ================== 00:14:26.026 Range in us Cumulative Count 00:14:26.026 2.062 - 2.074: 1.6993% ( 231) 00:14:26.026 2.074 - 2.086: 34.1474% ( 4411) 00:14:26.026 2.086 - 2.098: 42.8351% ( 1181) 00:14:26.026 2.098 - 2.110: 46.9104% ( 554) 00:14:26.026 2.110 - 2.121: 57.7828% ( 1478) 00:14:26.026 2.121 - 2.133: 59.7690% ( 270) 00:14:26.026 2.133 - 2.145: 64.6903% ( 669) 00:14:26.026 2.145 - 2.157: 74.4299% ( 1324) 00:14:26.026 2.157 - 2.169: 75.6657% ( 168) 00:14:26.026 2.169 - 2.181: 78.0124% ( 319) 00:14:26.026 2.181 - 2.193: 80.7783% ( 376) 00:14:26.026 2.193 - 2.204: 81.4845% ( 96) 00:14:26.026 2.204 - 2.216: 83.2353% ( 238) 00:14:26.026 2.216 - 2.228: 88.6053% ( 730) 00:14:26.026 2.228 - 2.240: 90.4443% ( 250) 00:14:26.026 2.240 - 2.252: 91.7316% ( 175) 00:14:26.026 2.252 - 2.264: 93.2764% ( 210) 00:14:26.026 2.264 - 2.276: 93.7178% ( 60) 00:14:26.026 2.276 - 2.287: 94.0341% ( 43) 00:14:26.026 2.287 - 2.299: 94.7550% ( 98) 00:14:26.026 2.299 - 2.311: 95.2773% ( 71) 00:14:26.026 2.311 - 2.323: 95.4612% ( 25) 00:14:26.026 2.323 - 2.335: 95.5127% ( 7) 00:14:26.026 2.335 - 2.347: 95.5863% ( 10) 00:14:26.026 2.347 - 2.359: 95.6819% ( 13) 00:14:26.026 2.359 - 2.370: 95.7775% ( 13) 00:14:26.026 2.370 - 2.382: 96.0203% ( 33) 00:14:26.026 2.382 - 2.394: 96.3146% ( 40) 00:14:26.026 2.394 - 2.406: 96.5867% ( 37) 00:14:26.026 2.406 - 2.418: 96.8295% ( 33) 00:14:26.026 2.418 - 2.430: 96.9840% ( 21) 00:14:26.026 2.430 - 2.441: 97.2194% ( 32) 00:14:26.026 2.441 - 2.453: 97.3444% ( 17) 00:14:26.026 2.453 - 2.465: 97.4842% ( 19) 00:14:26.026 2.465 - 2.477: 97.6534% ( 23) 00:14:26.026 2.477 - 2.489: 97.7711% ( 16) 00:14:26.026 2.489 - 2.501: 97.9329% ( 22) 00:14:26.026 2.501 - 2.513: 98.0138% ( 11) 00:14:26.026 2.513 - 2.524: 98.0580% ( 6) 00:14:26.026 2.524 - 2.536: 98.1021% ( 6) 00:14:26.026 2.536 - 2.548: 98.1536% ( 7) 00:14:26.026 2.548 - 2.560: 98.1904% ( 5) 00:14:26.026 2.560 - 2.572: 98.2272% ( 5) 00:14:26.026 2.572 - 2.584: 98.2639% ( 5) 00:14:26.026 2.584 - 2.596: 98.3154% ( 7) 00:14:26.026 2.607 - 2.619: 98.3228% ( 1) 00:14:26.026 2.619 - 2.631: 98.3301% ( 1) 00:14:26.026 2.702 - 2.714: 98.3375% ( 1) 00:14:26.026 2.738 - 2.750: 98.3449% ( 1) 00:14:26.026 2.750 - 2.761: 98.3522% ( 1) 00:14:26.026 2.797 - 2.809: 
98.3596% ( 1) 00:14:26.026 3.390 - 3.413: 98.3669% ( 1) 00:14:26.026 3.413 - 3.437: 98.3743% ( 1) 00:14:26.026 3.461 - 3.484: 98.3816% ( 1) 00:14:26.026 3.484 - 3.508: 98.4111% ( 4) 00:14:26.026 3.508 - 3.532: 98.4184% ( 1) 00:14:26.026 3.556 - 3.579: 98.4258% ( 1) 00:14:26.026 3.579 - 3.603: 98.4331% ( 1) 00:14:26.026 3.627 - 3.650: 98.4405% ( 1) 00:14:26.026 3.674 - 3.698: 98.4478% ( 1) 00:14:26.026 3.698 - 3.721: 98.4626% ( 2) 00:14:26.026 3.745 - 3.769: 98.4699% ( 1) 00:14:26.026 3.769 - 3.793: 98.4773% ( 1) 00:14:26.026 3.816 - 3.840: 98.4846% ( 1) 00:14:26.026 3.887 - 3.911: 98.4920% ( 1) 00:14:26.026 3.935 - 3.959: 98.4993% ( 1) 00:14:26.026 3.959 - 3.982: 98.5067% ( 1) 00:14:26.026 3.982 - 4.006: 98.5141% ( 1) 00:14:26.026 4.030 - 4.053: 98.5214% ( 1) 00:14:26.026 4.124 - 4.148: 98.5288% ( 1) 00:14:26.026 4.219 - 4.243: 98.5361% ( 1) 00:14:26.026 5.001 - 5.025: 98.5435% ( 1) 00:14:26.026 5.049 - 5.073: 98.5508% ( 1) 00:14:26.026 5.073 - 5.096: 98.5582% ( 1) 00:14:26.026 5.096 - 5.120: 98.5655% ( 1) 00:14:26.026 5.191 - 5.215: 98.5729% ( 1) 00:14:26.026 5.310 - 5.333: 98.5803% ( 1) 00:14:26.026 5.547 - 5.570: 98.5876% ( 1) 00:14:26.026 5.641 - 5.665: 98.5950% ( 1) 00:14:26.026 5.665 - 5.689: 98.6023% ( 1) 00:14:26.026 5.760 - 5.784: 98.6097% ( 1) 00:14:26.026 5.784 - 5.807: 98.6170% ( 1) 00:14:26.026 5.855 - 5.879: 98.6244% ( 1) 00:14:26.026 5.902 - 5.926: 98.6317% ( 1) [2024-07-22 12:09:33.929356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.282 5.950 - 5.973: 98.6391% ( 1) 00:14:26.282 5.997 - 6.021: 98.6538% ( 2) 00:14:26.282 6.068 - 6.116: 98.6685% ( 2) 00:14:26.282 6.258 - 6.305: 98.6759% ( 1) 00:14:26.282 6.305 - 6.353: 98.6832% ( 1) 00:14:26.282 6.353 - 6.400: 98.6906% ( 1) 00:14:26.282 6.400 - 6.447: 98.6980% ( 1) 00:14:26.282 6.447 - 6.495: 98.7053% ( 1) 00:14:26.282 6.779 - 6.827: 98.7127% ( 1) 00:14:26.282 15.455 - 15.550: 98.7200% ( 1) 00:14:26.282 15.550 - 15.644: 98.7274% ( 1) 00:14:26.282 15.644 - 15.739: 98.7347% ( 1) 00:14:26.282 15.739 - 15.834: 98.7715% ( 5) 00:14:26.282 15.834 - 15.929: 98.7862% ( 2) 00:14:26.282 15.929 - 16.024: 98.8157% ( 4) 00:14:26.282 16.024 - 16.119: 98.8892% ( 10) 00:14:26.282 16.119 - 16.213: 98.9113% ( 3) 00:14:26.282 16.213 - 16.308: 98.9481% ( 5) 00:14:26.282 16.308 - 16.403: 98.9701% ( 3) 00:14:26.282 16.403 - 16.498: 99.0216% ( 7) 00:14:26.282 16.498 - 16.593: 99.0952% ( 10) 00:14:26.282 16.593 - 16.687: 99.1761% ( 11) 00:14:26.282 16.687 - 16.782: 99.2055% ( 4) 00:14:26.282 16.782 - 16.877: 99.2497% ( 6) 00:14:26.282 16.877 - 16.972: 99.2644% ( 2) 00:14:26.282 17.067 - 17.161: 99.2791% ( 2) 00:14:26.282 17.161 - 17.256: 99.3012% ( 3) 00:14:26.282 17.256 - 17.351: 99.3232% ( 3) 00:14:26.282 17.351 - 17.446: 99.3306% ( 1) 00:14:26.282 17.446 - 17.541: 99.3453% ( 2) 00:14:26.282 17.636 - 17.730: 99.3527% ( 1) 00:14:26.282 17.825 - 17.920: 99.3600% ( 1) 00:14:26.282 17.920 - 18.015: 99.3674% ( 1) 00:14:26.282 18.015 - 18.110: 99.3747% ( 1) 00:14:26.282 18.204 - 18.299: 99.3894% ( 2) 00:14:26.282 18.299 - 18.394: 99.3968% ( 1) 00:14:26.282 18.394 - 18.489: 99.4041% ( 1) 00:14:26.282 3980.705 - 4004.978: 99.8823% ( 65) 00:14:26.282 4004.978 - 4029.250: 100.0000% ( 16) 00:14:26.282 00:14:26.282 12:09:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:26.282 12:09:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local
traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:26.282 12:09:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:26.282 12:09:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:26.282 12:09:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.539 [ 00:14:26.539 { 00:14:26.539 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.539 "subtype": "Discovery", 00:14:26.539 "listen_addresses": [], 00:14:26.539 "allow_any_host": true, 00:14:26.539 "hosts": [] 00:14:26.539 }, 00:14:26.539 { 00:14:26.539 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.539 "subtype": "NVMe", 00:14:26.539 "listen_addresses": [ 00:14:26.539 { 00:14:26.539 "trtype": "VFIOUSER", 00:14:26.539 "adrfam": "IPv4", 00:14:26.539 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.539 "trsvcid": "0" 00:14:26.539 } 00:14:26.539 ], 00:14:26.539 "allow_any_host": true, 00:14:26.539 "hosts": [], 00:14:26.539 "serial_number": "SPDK1", 00:14:26.539 "model_number": "SPDK bdev Controller", 00:14:26.539 "max_namespaces": 32, 00:14:26.539 "min_cntlid": 1, 00:14:26.539 "max_cntlid": 65519, 00:14:26.539 "namespaces": [ 00:14:26.539 { 00:14:26.539 "nsid": 1, 00:14:26.539 "bdev_name": "Malloc1", 00:14:26.539 "name": "Malloc1", 00:14:26.539 "nguid": "2AEC63D427584F9DB4D41F548960B092", 00:14:26.539 "uuid": "2aec63d4-2758-4f9d-b4d4-1f548960b092" 00:14:26.539 }, 00:14:26.539 { 00:14:26.539 "nsid": 2, 00:14:26.539 "bdev_name": "Malloc3", 00:14:26.539 "name": "Malloc3", 00:14:26.539 "nguid": "7BA9D3369E21455C95F0CC2F8F918381", 00:14:26.539 "uuid": "7ba9d336-9e21-455c-95f0-cc2f8f918381" 00:14:26.539 } 00:14:26.539 ] 00:14:26.539 }, 00:14:26.539 { 00:14:26.539 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.539 "subtype": "NVMe", 00:14:26.539 "listen_addresses": [ 00:14:26.539 { 00:14:26.539 "trtype": "VFIOUSER", 00:14:26.539 "adrfam": "IPv4", 00:14:26.539 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.539 "trsvcid": "0" 00:14:26.539 } 00:14:26.539 ], 00:14:26.539 "allow_any_host": true, 00:14:26.539 "hosts": [], 00:14:26.539 "serial_number": "SPDK2", 00:14:26.539 "model_number": "SPDK bdev Controller", 00:14:26.539 "max_namespaces": 32, 00:14:26.539 "min_cntlid": 1, 00:14:26.539 "max_cntlid": 65519, 00:14:26.539 "namespaces": [ 00:14:26.539 { 00:14:26.539 "nsid": 1, 00:14:26.539 "bdev_name": "Malloc2", 00:14:26.539 "name": "Malloc2", 00:14:26.539 "nguid": "04904C8662854011880DBCC25486F673", 00:14:26.539 "uuid": "04904c86-6285-4011-880d-bcc25486f673" 00:14:26.539 } 00:14:26.539 ] 00:14:26.539 } 00:14:26.539 ] 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=955572 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:26.539 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:26.539 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.539 [2024-07-22 12:09:34.379081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.796 Malloc4 00:14:26.796 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:27.051 [2024-07-22 12:09:34.728634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.051 12:09:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:27.051 Asynchronous Event Request test 00:14:27.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.051 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.051 Registering asynchronous event callbacks... 00:14:27.051 Starting namespace attribute notice tests for all controllers... 00:14:27.051 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:27.051 aer_cb - Changed Namespace 00:14:27.051 Cleaning up... 
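(For reference, one way to confirm a hot-added namespace from the shell is to re-query the subsystems over the same RPC interface, as the test does above; a minimal sketch, assuming the default RPC socket, with the jq filter purely illustrative and not part of this test:)

  # Sketch only: assumes the target's RPC socket is the default
  # /var/tmp/spdk.sock and that jq is installed on the build host.
  SPDK_DIR=/path/to/spdk   # assumption: local checkout
  "$SPDK_DIR"/scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2") | .namespaces'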
00:14:27.307 [ 00:14:27.307 { 00:14:27.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.307 "subtype": "Discovery", 00:14:27.307 "listen_addresses": [], 00:14:27.307 "allow_any_host": true, 00:14:27.307 "hosts": [] 00:14:27.307 }, 00:14:27.307 { 00:14:27.307 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:27.307 "subtype": "NVMe", 00:14:27.307 "listen_addresses": [ 00:14:27.307 { 00:14:27.307 "trtype": "VFIOUSER", 00:14:27.307 "adrfam": "IPv4", 00:14:27.307 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:27.307 "trsvcid": "0" 00:14:27.307 } 00:14:27.307 ], 00:14:27.307 "allow_any_host": true, 00:14:27.307 "hosts": [], 00:14:27.307 "serial_number": "SPDK1", 00:14:27.307 "model_number": "SPDK bdev Controller", 00:14:27.307 "max_namespaces": 32, 00:14:27.307 "min_cntlid": 1, 00:14:27.307 "max_cntlid": 65519, 00:14:27.307 "namespaces": [ 00:14:27.307 { 00:14:27.307 "nsid": 1, 00:14:27.307 "bdev_name": "Malloc1", 00:14:27.307 "name": "Malloc1", 00:14:27.307 "nguid": "2AEC63D427584F9DB4D41F548960B092", 00:14:27.307 "uuid": "2aec63d4-2758-4f9d-b4d4-1f548960b092" 00:14:27.307 }, 00:14:27.307 { 00:14:27.307 "nsid": 2, 00:14:27.307 "bdev_name": "Malloc3", 00:14:27.307 "name": "Malloc3", 00:14:27.307 "nguid": "7BA9D3369E21455C95F0CC2F8F918381", 00:14:27.307 "uuid": "7ba9d336-9e21-455c-95f0-cc2f8f918381" 00:14:27.307 } 00:14:27.307 ] 00:14:27.307 }, 00:14:27.307 { 00:14:27.307 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:27.307 "subtype": "NVMe", 00:14:27.307 "listen_addresses": [ 00:14:27.307 { 00:14:27.307 "trtype": "VFIOUSER", 00:14:27.307 "adrfam": "IPv4", 00:14:27.307 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:27.307 "trsvcid": "0" 00:14:27.307 } 00:14:27.307 ], 00:14:27.307 "allow_any_host": true, 00:14:27.307 "hosts": [], 00:14:27.307 "serial_number": "SPDK2", 00:14:27.307 "model_number": "SPDK bdev Controller", 00:14:27.307 "max_namespaces": 32, 00:14:27.307 "min_cntlid": 1, 00:14:27.307 "max_cntlid": 65519, 00:14:27.307 "namespaces": [ 00:14:27.307 { 00:14:27.307 "nsid": 1, 00:14:27.307 "bdev_name": "Malloc2", 00:14:27.307 "name": "Malloc2", 00:14:27.307 "nguid": "04904C8662854011880DBCC25486F673", 00:14:27.307 "uuid": "04904c86-6285-4011-880d-bcc25486f673" 00:14:27.307 }, 00:14:27.307 { 00:14:27.307 "nsid": 2, 00:14:27.307 "bdev_name": "Malloc4", 00:14:27.307 "name": "Malloc4", 00:14:27.307 "nguid": "5842D0D0EFF44AFDBA287EBAC4E29BE8", 00:14:27.307 "uuid": "5842d0d0-eff4-4afd-ba28-7ebac4e29be8" 00:14:27.307 } 00:14:27.307 ] 00:14:27.307 } 00:14:27.307 ] 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 955572 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 949364 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 949364 ']' 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 949364 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949364 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949364' 00:14:27.307 killing process with pid 949364 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 949364 00:14:27.307 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 949364 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=955712 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 955712' 00:14:27.564 Process pid: 955712 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 955712 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 955712 ']' 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.564 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:27.564 [2024-07-22 12:09:35.418970] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:27.564 [2024-07-22 12:09:35.419983] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:14:27.564 [2024-07-22 12:09:35.420045] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.564 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.564 [2024-07-22 12:09:35.451273] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:27.564 [2024-07-22 12:09:35.481757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.822 [2024-07-22 12:09:35.570600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:27.822 [2024-07-22 12:09:35.570666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.822 [2024-07-22 12:09:35.570683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.822 [2024-07-22 12:09:35.570697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.822 [2024-07-22 12:09:35.570708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.822 [2024-07-22 12:09:35.570803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.822 [2024-07-22 12:09:35.570856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.822 [2024-07-22 12:09:35.570969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.822 [2024-07-22 12:09:35.570971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.822 [2024-07-22 12:09:35.678507] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:27.822 [2024-07-22 12:09:35.678735] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:27.822 [2024-07-22 12:09:35.679046] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:27.822 [2024-07-22 12:09:35.679659] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:27.822 [2024-07-22 12:09:35.679892] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
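The notices above show the target restarted with --interrupt-mode on cores 0-3, with every poll-group thread switched to interrupt mode before any I/O path is configured. The trace that follows then replays the same per-device RPC sequence used earlier in the run. As a minimal sketch, assuming a running target and the rpc.py path from this log (the loop variable is illustrative; the commands, sizes, and paths are copied from the trace):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER -M -I        # -M -I are the extra transport_args for this pass
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i      # 64 MB malloc bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done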
00:14:27.822 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.822 12:09:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:27.822 12:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:29.192 12:09:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:29.192 12:09:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:29.192 12:09:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:29.192 12:09:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:29.192 12:09:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:29.192 12:09:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:29.450 Malloc1 00:14:29.450 12:09:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:29.709 12:09:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:29.967 12:09:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:30.225 12:09:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.225 12:09:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:30.225 12:09:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:30.483 Malloc2 00:14:30.484 12:09:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:31.049 12:09:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:31.049 12:09:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:31.307 12:09:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 955712 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 955712 ']' 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 955712 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.308 12:09:39 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 955712 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 955712' 00:14:31.308 killing process with pid 955712 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 955712 00:14:31.308 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 955712 00:14:31.567 12:09:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:31.567 12:09:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:31.567 00:14:31.567 real 0m52.635s 00:14:31.567 user 3m27.753s 00:14:31.567 sys 0m4.509s 00:14:31.567 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.567 12:09:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:31.567 ************************************ 00:14:31.567 END TEST nvmf_vfio_user 00:14:31.567 ************************************ 00:14:31.567 12:09:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.567 12:09:39 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:31.567 12:09:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.567 12:09:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.567 12:09:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.826 ************************************ 00:14:31.826 START TEST nvmf_vfio_user_nvme_compliance 00:14:31.826 ************************************ 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:31.826 * Looking for test storage... 
00:14:31.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=956305 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 956305' 00:14:31.826 Process pid: 956305 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 956305 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 956305 ']' 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.826 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:31.826 [2024-07-22 12:09:39.641109] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:14:31.826 [2024-07-22 12:09:39.641199] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.826 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.826 [2024-07-22 12:09:39.677428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:31.826 [2024-07-22 12:09:39.708159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:32.085 [2024-07-22 12:09:39.801275] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.085 [2024-07-22 12:09:39.801344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.085 [2024-07-22 12:09:39.801361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.085 [2024-07-22 12:09:39.801374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.085 [2024-07-22 12:09:39.801386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
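The compliance stage that follows drives a single vfio-user subsystem with the nvme_compliance tool; the startup notices above belong to the fresh nvmf_tgt started with -m 0x7 for that purpose. Condensed into a hedged sketch, using only the RPCs and transport-ID string that appear verbatim further down in this trace (relative paths assume the spdk build tree used by the run):

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -m 32: cap namespaces at 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  ./test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'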
00:14:32.085 [2024-07-22 12:09:39.801504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.085 [2024-07-22 12:09:39.801446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.085 [2024-07-22 12:09:39.801500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.085 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.085 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:14:32.085 12:09:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:33.080 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:33.080 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:33.080 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 malloc0 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.081 
12:09:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:33.338 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.338 00:14:33.338 00:14:33.338 CUnit - A unit testing framework for C - Version 2.1-3 00:14:33.338 http://cunit.sourceforge.net/ 00:14:33.338 00:14:33.338 00:14:33.338 Suite: nvme_compliance 00:14:33.338 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-22 12:09:41.147386] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.338 [2024-07-22 12:09:41.148909] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:33.338 [2024-07-22 12:09:41.148934] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:33.338 [2024-07-22 12:09:41.148947] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:33.338 [2024-07-22 12:09:41.150404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.338 passed 00:14:33.338 Test: admin_identify_ctrlr_verify_fused ...[2024-07-22 12:09:41.237027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.338 [2024-07-22 12:09:41.240047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.596 passed 00:14:33.596 Test: admin_identify_ns ...[2024-07-22 12:09:41.328145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.596 [2024-07-22 12:09:41.387631] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:33.596 [2024-07-22 12:09:41.395629] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:33.596 [2024-07-22 12:09:41.416759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.596 passed 00:14:33.596 Test: admin_get_features_mandatory_features ...[2024-07-22 12:09:41.500265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.596 [2024-07-22 12:09:41.503288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.853 passed 00:14:33.853 Test: admin_get_features_optional_features ...[2024-07-22 12:09:41.588817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.853 [2024-07-22 12:09:41.591842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.853 passed 00:14:33.853 Test: admin_set_features_number_of_queues ...[2024-07-22 12:09:41.674189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.853 [2024-07-22 12:09:41.779717] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.111 passed 00:14:34.111 Test: admin_get_log_page_mandatory_logs ...[2024-07-22 12:09:41.863476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.111 [2024-07-22 12:09:41.866509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.111 passed 00:14:34.111 Test: admin_get_log_page_with_lpo ...[2024-07-22 12:09:41.948114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.111 [2024-07-22 12:09:42.015632] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:34.111 [2024-07-22 12:09:42.028715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.367 passed 00:14:34.367 Test: fabric_property_get ...[2024-07-22 12:09:42.112520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.367 [2024-07-22 12:09:42.113815] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:34.367 [2024-07-22 12:09:42.115538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.367 passed 00:14:34.367 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-22 12:09:42.200092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.367 [2024-07-22 12:09:42.201397] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:34.367 [2024-07-22 12:09:42.203115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.367 passed 00:14:34.367 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-22 12:09:42.286109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.624 [2024-07-22 12:09:42.369623] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:34.624 [2024-07-22 12:09:42.385637] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:34.624 [2024-07-22 12:09:42.390710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.624 passed 00:14:34.624 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-22 12:09:42.473184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.624 [2024-07-22 12:09:42.474481] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:34.624 [2024-07-22 12:09:42.476202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.624 passed 00:14:34.880 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-22 12:09:42.559565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.880 [2024-07-22 12:09:42.636625] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:34.880 [2024-07-22 12:09:42.660624] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:34.880 [2024-07-22 12:09:42.665736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.880 passed 00:14:34.880 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-22 12:09:42.748214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.880 [2024-07-22 12:09:42.749509] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:34.880 [2024-07-22 12:09:42.749560] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:34.880 [2024-07-22 12:09:42.751236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.880 passed 00:14:35.136 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-22 12:09:42.835505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.136 [2024-07-22 12:09:42.922628] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:35.136 [2024-07-22 12:09:42.931625] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:35.136 [2024-07-22 12:09:42.939627] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:35.136 [2024-07-22 12:09:42.947636] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:35.136 [2024-07-22 12:09:42.976735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.136 passed 00:14:35.136 Test: admin_create_io_sq_verify_pc ...[2024-07-22 12:09:43.060248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.391 [2024-07-22 12:09:43.076638] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:35.391 [2024-07-22 12:09:43.094563] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.391 passed 00:14:35.391 Test: admin_create_io_qp_max_qps ...[2024-07-22 12:09:43.175119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.758 [2024-07-22 12:09:44.291630] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:36.758 [2024-07-22 12:09:44.678213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.015 passed 00:14:37.015 Test: admin_create_io_sq_shared_cq ...[2024-07-22 12:09:44.763113] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.015 [2024-07-22 12:09:44.898622] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:37.015 [2024-07-22 12:09:44.935716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.272 passed 00:14:37.272 00:14:37.272 Run Summary: Type Total Ran Passed Failed Inactive 00:14:37.272 suites 1 1 n/a 0 0 00:14:37.272 tests 18 18 18 0 0 00:14:37.272 asserts 360 360 360 0 n/a 00:14:37.272 00:14:37.272 Elapsed time = 1.571 seconds 00:14:37.272 12:09:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 956305 00:14:37.272 12:09:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 956305 ']' 00:14:37.272 12:09:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 956305 00:14:37.272 12:09:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:14:37.272 12:09:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.272 12:09:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 956305 00:14:37.272 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:37.272 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:37.272 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 956305' 00:14:37.272 killing process with pid 956305 00:14:37.272 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 956305 00:14:37.272 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 956305 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:37.530 00:14:37.530 real 0m5.752s 00:14:37.530 user 0m16.123s 00:14:37.530 sys 0m0.566s 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:37.530 ************************************ 00:14:37.530 END TEST nvmf_vfio_user_nvme_compliance 00:14:37.530 ************************************ 00:14:37.530 12:09:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:37.530 12:09:45 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:37.530 12:09:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.530 12:09:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.530 12:09:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.530 ************************************ 00:14:37.530 START TEST nvmf_vfio_user_fuzz 00:14:37.530 ************************************ 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:37.530 * Looking for test storage... 00:14:37.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.530 12:09:45 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=957032 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 957032' 00:14:37.530 Process pid: 957032 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 957032 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 957032 ']' 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
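Once the target is up, the fuzz stage builds the same one-subsystem vfio-user layout and then lets nvme_fuzz hammer it; the invocation and its result summary follow in the trace. A hedged sketch with every flag copied verbatim from the log: judging from the run, -t 30 bounds the fuzz time in seconds and -S 123456 fixes the random seed echoed in the summary, while -m 0x2 is the usual SPDK core mask and -N/-a are passed through unchanged from vfio_user_fuzz.sh:

  trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a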
00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.530 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.788 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.788 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:14:37.788 12:09:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.162 malloc0 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:39.162 12:09:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:11.209 Fuzzing completed. 
Shutting down the fuzz application 00:15:11.209 00:15:11.209 Dumping successful admin opcodes: 00:15:11.209 8, 9, 10, 24, 00:15:11.209 Dumping successful io opcodes: 00:15:11.209 0, 00:15:11.209 NS: 0x200003a1ef00 I/O qp, Total commands completed: 579551, total successful commands: 2225, random_seed: 3427128320 00:15:11.209 NS: 0x200003a1ef00 admin qp, Total commands completed: 102739, total successful commands: 848, random_seed: 4204886400 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 957032 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 957032 ']' 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 957032 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 957032 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 957032' 00:15:11.209 killing process with pid 957032 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 957032 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 957032 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:11.209 00:15:11.209 real 0m32.227s 00:15:11.209 user 0m31.144s 00:15:11.209 sys 0m28.870s 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.209 12:10:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.209 ************************************ 00:15:11.209 END TEST nvmf_vfio_user_fuzz 00:15:11.209 ************************************ 00:15:11.209 12:10:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:11.209 12:10:17 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:11.209 12:10:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.209 12:10:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.209 12:10:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.209 ************************************ 00:15:11.209 START 
TEST nvmf_host_management 00:15:11.209 ************************************ 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:11.209 * Looking for test storage... 00:15:11.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.209 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.210 12:10:17 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.210 12:10:17 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:11.210 12:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:11.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:11.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:11.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.777 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:11.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.778 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.036 12:10:19 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.036 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.036 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:15:12.037 00:15:12.037 --- 10.0.0.2 ping statistics --- 00:15:12.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.037 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:15:12.037 00:15:12.037 --- 10.0.0.1 ping statistics --- 00:15:12.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.037 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=962446 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 962446 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 962446 ']' 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:12.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.037 12:10:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.037 [2024-07-22 12:10:19.813673] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:15:12.037 [2024-07-22 12:10:19.813749] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.037 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.037 [2024-07-22 12:10:19.855537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:12.037 [2024-07-22 12:10:19.886123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.296 [2024-07-22 12:10:19.979258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.296 [2024-07-22 12:10:19.979313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.296 [2024-07-22 12:10:19.979339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.296 [2024-07-22 12:10:19.979352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.296 [2024-07-22 12:10:19.979363] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.296 [2024-07-22 12:10:19.979474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.296 [2024-07-22 12:10:19.979578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.296 [2024-07-22 12:10:19.979643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:12.296 [2024-07-22 12:10:19.979646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.296 [2024-07-22 12:10:20.143636] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:12.296 12:10:20 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.296 Malloc0 00:15:12.296 [2024-07-22 12:10:20.205573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.296 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=962528 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 962528 /var/tmp/bdevperf.sock 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 962528 ']' 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
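The create_subsystem step in this block follows the usual SPDK harness pattern: the RPC calls are appended to rpcs.txt and then replayed in one batch through rpc_cmd (the bare `cat` and `rpc_cmd` invocations traced above), which is why Malloc0 and the TCP listener on 10.0.0.2:4420 appear without per-call traces. A minimal sketch of an equivalent batch, assuming standard SPDK RPC names; the malloc size, block size and serial number here are illustrative assumptions, not values taken from this trace:

# Sketch only: subsystem setup equivalent to the rpcs.txt replay above.
# Sizes and the serial number are assumed; the RPC names are standard SPDK.
rpc_cmd <<RPC
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
RPC

rpc_cmd forwards these lines to scripts/rpc.py against the target started inside the cvl_0_0_ns_spdk namespace, so the listener comes up on the namespaced 10.0.0.2 address verified by the pings earlier.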
00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:12.555 { 00:15:12.555 "params": { 00:15:12.555 "name": "Nvme$subsystem", 00:15:12.555 "trtype": "$TEST_TRANSPORT", 00:15:12.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:12.555 "adrfam": "ipv4", 00:15:12.555 "trsvcid": "$NVMF_PORT", 00:15:12.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:12.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:12.555 "hdgst": ${hdgst:-false}, 00:15:12.555 "ddgst": ${ddgst:-false} 00:15:12.555 }, 00:15:12.555 "method": "bdev_nvme_attach_controller" 00:15:12.555 } 00:15:12.555 EOF 00:15:12.555 )") 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:12.555 12:10:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:12.555 "params": { 00:15:12.555 "name": "Nvme0", 00:15:12.555 "trtype": "tcp", 00:15:12.555 "traddr": "10.0.0.2", 00:15:12.555 "adrfam": "ipv4", 00:15:12.555 "trsvcid": "4420", 00:15:12.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:12.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:12.555 "hdgst": false, 00:15:12.555 "ddgst": false 00:15:12.555 }, 00:15:12.555 "method": "bdev_nvme_attach_controller" 00:15:12.555 }' 00:15:12.555 [2024-07-22 12:10:20.287961] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:15:12.556 [2024-07-22 12:10:20.288065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962528 ] 00:15:12.556 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.556 [2024-07-22 12:10:20.321895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:12.556 [2024-07-22 12:10:20.351008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.556 [2024-07-22 12:10:20.437794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.814 Running I/O for 10 seconds... 
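The `config=()` / heredoc / `jq .` sequence above is gen_nvmf_target_json rendering the bdevperf configuration shown right after it: one bdev_nvme_attach_controller entry per requested subsystem, joined with IFS=',' and handed to bdevperf over an anonymous fd (the --json /dev/fd/63 process substitution). A condensed sketch of that pattern, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are set as in the trace; the verbatim helper in the harness may differ in detail:

# Condensed sketch of the gen_nvmf_target_json pattern traced above.
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    # One attach_controller entry per requested subsystem index:
    config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  }
}
EOF
    )")
  done
  # Join the entries with commas and validate/pretty-print through jq:
  local IFS=,
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# Consumed over an anonymous fd, as --json /dev/fd/63 in the trace:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10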
00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:12.814 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:15:13.072 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:15:13.072 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:13.072 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:13.072 12:10:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:13.072 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.072 12:10:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.072 12:10:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.339 12:10:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.340 [2024-07-22 12:10:21.028319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.028485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2ad0 is same with the state(5) to be set 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.340 12:10:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.340 [2024-07-22 12:10:21.034274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.340 [2024-07-22 12:10:21.034319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.340 [2024-07-22 12:10:21.034351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.340 [2024-07-22 12:10:21.034377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.340 [2024-07-22 12:10:21.034404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aab50 is same with the state(5) to be set 00:15:13.340 [2024-07-22 12:10:21.034474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.034967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.034983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:13.340 [2024-07-22 12:10:21.035125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:13.340 [2024-07-22 12:10:21.035445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.340 [2024-07-22 12:10:21.035461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.340 [2024-07-22 12:10:21.035476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 
[2024-07-22 12:10:21.035797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.035970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.035984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 
12:10:21.036137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 
12:10:21.036455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.341 [2024-07-22 12:10:21.036637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.341 [2024-07-22 12:10:21.036732] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24bbd80 was disconnected and freed. reset controller. 00:15:13.341 [2024-07-22 12:10:21.037848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:15:13.341 task offset: 73728 on job bdev=Nvme0n1 fails
00:15:13.341
00:15:13.341 Latency(us)
00:15:13.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:13.341 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:13.341 Job: Nvme0n1 ended in about 0.39 seconds with error
00:15:13.341 Verification LBA range: start 0x0 length 0x400
00:15:13.341 Nvme0n1 : 0.39 1495.41 93.46 166.16 0.00 37387.10 3070.48 33981.63
00:15:13.341 ===================================================================================================================
00:15:13.341 Total : 1495.41 93.46 166.16 0.00 37387.10 3070.48 33981.63
00:15:13.341 [2024-07-22 12:10:21.039726] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:13.341 [2024-07-22 12:10:21.039757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aab50 (9): Bad file descriptor 00:15:13.341 12:10:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.341 12:10:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:13.341 [2024-07-22 12:10:21.141791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
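The wall of ABORTED - SQ DELETION completions above is the intended fault injection, not a test failure: after waitforio saw num_read_ops cross the 100-read threshold (67, then 515 in the trace), the host was removed from the subsystem, the target tore down its I/O qpair and aborted the in-flight writes, the bdevperf job failed, and the re-added host let the controller reset succeed at 12:10:21.141791. A sketch of that sequence, using the helper and RPC names visible in the trace:

# Sketch of the polling gate and fault injection traced above.
waitforio() {
  # Poll bdevperf's iostat until the bdev has served at least 100 reads,
  # giving up after 10 attempts spaced 0.25 s apart, as in the trace.
  local sock=$1 bdev=$2 i ret=1 count
  for ((i = 10; i != 0; i--)); do
    count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme0n1
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0  # target aborts the host's qpair
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0     # re-admit the host
sleep 1   # give bdevperf time to notice the drop and reset the controller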
00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 962528 00:15:14.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (962528) - No such process 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:14.273 { 00:15:14.273 "params": { 00:15:14.273 "name": "Nvme$subsystem", 00:15:14.273 "trtype": "$TEST_TRANSPORT", 00:15:14.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.273 "adrfam": "ipv4", 00:15:14.273 "trsvcid": "$NVMF_PORT", 00:15:14.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.273 "hdgst": ${hdgst:-false}, 00:15:14.273 "ddgst": ${ddgst:-false} 00:15:14.273 }, 00:15:14.273 "method": "bdev_nvme_attach_controller" 00:15:14.273 } 00:15:14.273 EOF 00:15:14.273 )") 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:14.273 12:10:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:14.273 "params": { 00:15:14.273 "name": "Nvme0", 00:15:14.273 "trtype": "tcp", 00:15:14.273 "traddr": "10.0.0.2", 00:15:14.273 "adrfam": "ipv4", 00:15:14.273 "trsvcid": "4420", 00:15:14.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:14.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:14.273 "hdgst": false, 00:15:14.273 "ddgst": false 00:15:14.273 }, 00:15:14.273 "method": "bdev_nvme_attach_controller" 00:15:14.273 }' 00:15:14.273 [2024-07-22 12:10:22.086690] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:15:14.273 [2024-07-22 12:10:22.086774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962806 ] 00:15:14.273 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.273 [2024-07-22 12:10:22.120263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:14.273 [2024-07-22 12:10:22.148946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.530 [2024-07-22 12:10:22.234279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.530 Running I/O for 1 seconds... 
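The failed `kill -9 962528` above is expected rather than an error: bdevperf had already exited when its job failed, so host_management.sh line 91 masks the failure, the stale per-core lock files are cleared, and a second one-second verify pass runs against the same rendered JSON to prove the target survived the reset. The pattern, roughly:

# Tolerant teardown and re-verify, as traced above (sketch):
kill -9 "$perfpid" || true
rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
      /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1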
00:15:15.929
00:15:15.929 Latency(us)
00:15:15.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:15.929 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.929 Verification LBA range: start 0x0 length 0x400
00:15:15.929 Nvme0n1 : 1.01 1580.05 98.75 0.00 0.00 39853.22 8349.77 33399.09
00:15:15.929 ===================================================================================================================
00:15:15.929 Total : 1580.05 98.75 0.00 0.00 39853.22 8349.77 33399.09
00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.929 rmmod nvme_tcp 00:15:15.929 rmmod nvme_fabrics 00:15:15.929 rmmod nvme_keyring 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 962446 ']' 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 962446 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 962446 ']' 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 962446 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 962446 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 962446' 00:15:15.929 killing process with pid 962446 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 962446 00:15:15.929 12:10:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 962446 00:15:16.186 [2024-07-22 12:10:23.992626] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.186 12:10:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.712 12:10:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.712 12:10:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:18.712 00:15:18.712 real 0m8.470s 00:15:18.712 user 0m18.862s 00:15:18.712 sys 0m2.667s 00:15:18.712 12:10:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.712 12:10:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:18.712 ************************************ 00:15:18.712 END TEST nvmf_host_management 00:15:18.712 ************************************ 00:15:18.712 12:10:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:18.712 12:10:26 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:18.712 12:10:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:18.712 12:10:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.712 12:10:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.712 ************************************ 00:15:18.712 START TEST nvmf_lvol 00:15:18.712 ************************************ 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:18.712 * Looking for test storage... 
00:15:18.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.712 12:10:26 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:18.712 12:10:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.084 12:10:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:20.084 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:20.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:20.084 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:20.085 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:20.085 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:20.085 
12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.085 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:20.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:15:20.341 00:15:20.341 --- 10.0.0.2 ping statistics --- 00:15:20.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.341 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:20.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:15:20.341 00:15:20.341 --- 10.0.0.1 ping statistics --- 00:15:20.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.341 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=964885 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 964885 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 964885 ']' 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.341 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:20.341 [2024-07-22 12:10:28.210834] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:15:20.341 [2024-07-22 12:10:28.210920] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.341 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.341 [2024-07-22 12:10:28.248444] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:20.597 [2024-07-22 12:10:28.274877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:20.597 [2024-07-22 12:10:28.363291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
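The nvmf_tcp_init steps above split the two e810 ports across network namespaces so NVMe/TCP traffic crosses a real link: cvl_0_0 becomes the target-side port inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the commands in the log (interface, namespace, and address names exactly as printed there):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target sanity check

The target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the lines above.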
00:15:20.597 [2024-07-22 12:10:28.363355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.597 [2024-07-22 12:10:28.363382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.597 [2024-07-22 12:10:28.363396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.597 [2024-07-22 12:10:28.363407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.597 [2024-07-22 12:10:28.363500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.597 [2024-07-22 12:10:28.363576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.597 [2024-07-22 12:10:28.363579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.597 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.597 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:20.597 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.598 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.598 12:10:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:20.598 12:10:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.598 12:10:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:20.854 [2024-07-22 12:10:28.725142] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.854 12:10:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.111 12:10:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:21.111 12:10:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.368 12:10:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:21.368 12:10:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:21.625 12:10:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:21.881 12:10:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a59b2292-58dc-4a3d-83b5-1d0fd19dbcdd 00:15:21.881 12:10:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a59b2292-58dc-4a3d-83b5-1d0fd19dbcdd lvol 20 00:15:22.445 12:10:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f54f2373-de5b-4b82-9d20-c5e096fbd8ca 00:15:22.445 12:10:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:22.445 12:10:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f54f2373-de5b-4b82-9d20-c5e096fbd8ca 00:15:22.702 12:10:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:22.958 [2024-07-22 12:10:30.769177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.958 12:10:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.215 12:10:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=965303 00:15:23.215 12:10:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:23.215 12:10:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:23.215 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.147 12:10:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f54f2373-de5b-4b82-9d20-c5e096fbd8ca MY_SNAPSHOT 00:15:24.712 12:10:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5954a44c-bff8-4c5f-a5f8-e3d7279af7ef 00:15:24.712 12:10:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f54f2373-de5b-4b82-9d20-c5e096fbd8ca 30 00:15:24.970 12:10:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5954a44c-bff8-4c5f-a5f8-e3d7279af7ef MY_CLONE 00:15:25.228 12:10:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0654b950-948c-4100-8eac-1dc60fa624c2 00:15:25.228 12:10:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0654b950-948c-4100-8eac-1dc60fa624c2 00:15:25.793 12:10:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 965303 00:15:33.902 Initializing NVMe Controllers 00:15:33.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:33.902 Controller IO queue size 128, less than required. 00:15:33.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:33.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:33.902 Initialization complete. Launching workers. 
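While spdk_nvme_perf (pid 965303) drives random writes against the exported lvol, the script exercises the logical-volume stack underneath it. The RPC sequence, condensed from the log with shell variables standing in for the UUIDs it prints (a sketch, not verbatim script code):

  LVOL=f54f2373-de5b-4b82-9d20-c5e096fbd8ca                       # from bdev_lvol_create above
  SNAP=$(scripts/rpc.py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)   # freeze current data
  scripts/rpc.py bdev_lvol_resize "$LVOL" 30                      # grow the live lvol under I/O
  CLONE=$(scripts/rpc.py bdev_lvol_clone "$SNAP" MY_CLONE)        # writable clone of the snapshot
  scripts/rpc.py bdev_lvol_inflate "$CLONE"                       # decouple the clone from its snapshot

Each step runs while I/O is in flight, which is what the test is checking: none of these operations should disturb the perf job that `wait 965303` later collects.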
00:15:33.902 ======================================================== 00:15:33.903 Latency(us) 00:15:33.903 Device Information : IOPS MiB/s Average min max 00:15:33.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10260.90 40.08 12480.32 2269.41 84016.81 00:15:33.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10562.30 41.26 12123.08 2000.95 60013.63 00:15:33.903 ======================================================== 00:15:33.903 Total : 20823.20 81.34 12299.12 2000.95 84016.81 00:15:33.903 00:15:33.903 12:10:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:33.903 12:10:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f54f2373-de5b-4b82-9d20-c5e096fbd8ca 00:15:34.161 12:10:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a59b2292-58dc-4a3d-83b5-1d0fd19dbcdd 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.422 rmmod nvme_tcp 00:15:34.422 rmmod nvme_fabrics 00:15:34.422 rmmod nvme_keyring 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 964885 ']' 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 964885 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 964885 ']' 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 964885 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 964885 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 964885' 00:15:34.422 killing process with pid 964885 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 964885 00:15:34.422 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 964885 00:15:34.681 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.681 12:10:42 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.681 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.681 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.681 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.681 12:10:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.681 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.681 12:10:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.248 00:15:37.248 real 0m18.456s 00:15:37.248 user 1m3.694s 00:15:37.248 sys 0m5.397s 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:37.248 ************************************ 00:15:37.248 END TEST nvmf_lvol 00:15:37.248 ************************************ 00:15:37.248 12:10:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:37.248 12:10:44 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:37.248 12:10:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:37.248 12:10:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.248 12:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.248 ************************************ 00:15:37.248 START TEST nvmf_lvs_grow 00:15:37.248 ************************************ 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:37.248 * Looking for test storage... 
00:15:37.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.248 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.249 12:10:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:38.622 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.622 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:38.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:38.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:38.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.623 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:15:38.881 00:15:38.881 --- 10.0.0.2 ping statistics --- 00:15:38.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.881 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:15:38.881 00:15:38.881 --- 10.0.0.1 ping statistics --- 00:15:38.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.881 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=968442 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 968442 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 968442 ']' 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.881 12:10:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:38.881 [2024-07-22 12:10:46.732549] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:15:38.881 [2024-07-22 12:10:46.732670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.881 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.881 [2024-07-22 12:10:46.770549] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:38.881 [2024-07-22 12:10:46.798007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.138 [2024-07-22 12:10:46.889477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:39.138 [2024-07-22 12:10:46.889541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.138 [2024-07-22 12:10:46.889568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.138 [2024-07-22 12:10:46.889581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.138 [2024-07-22 12:10:46.889600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.138 [2024-07-22 12:10:46.889651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.138 12:10:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.138 12:10:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:39.138 12:10:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.138 12:10:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:39.138 12:10:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:39.138 12:10:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.138 12:10:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:39.394 [2024-07-22 12:10:47.301176] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.394 12:10:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:39.394 12:10:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:39.394 12:10:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.394 12:10:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:39.652 ************************************ 00:15:39.652 START TEST lvs_grow_clean 00:15:39.652 ************************************ 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.652 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:39.909 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:39.909 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:40.166 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6b563338-5979-43c7-a110-4335a0f65438 00:15:40.166 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:40.166 12:10:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:40.424 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:40.424 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:40.424 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6b563338-5979-43c7-a110-4335a0f65438 lvol 150 00:15:40.699 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8d1b734a-6d94-446a-8de2-1f8d03b6cad1 00:15:40.699 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:40.699 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:40.955 [2024-07-22 12:10:48.647092] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:40.955 [2024-07-22 12:10:48.647190] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:40.955 true 00:15:40.955 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:40.955 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:41.212 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:41.212 12:10:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:41.468 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d1b734a-6d94-446a-8de2-1f8d03b6cad1 00:15:41.468 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:41.724 [2024-07-22 12:10:49.654234] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.981 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=968884 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 968884 /var/tmp/bdevperf.sock 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 968884 ']' 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.238 12:10:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:42.238 [2024-07-22 12:10:49.961209] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:15:42.238 [2024-07-22 12:10:49.961283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968884 ] 00:15:42.238 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.238 [2024-07-22 12:10:49.993233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
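For readers following the trace, the clean-path setup above reduces to a short RPC sequence: create the backing file and aio bdev, build an lvstore and a logical volume on it, grow the file and rescan, then export the volume over NVMe/TCP. A condensed sketch, assuming a placeholder $rpc for the full scripts/rpc.py path and /tmp/aio_file for the backing file (the log uses paths under the Jenkins workspace):

truncate -s 200M /tmp/aio_file                                   # aio_init_size_mb=200
$rpc bdev_aio_create /tmp/aio_file aio_bdev 4096                 # 4 KiB logical blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                 # 150 MiB logical volume
truncate -s 400M /tmp/aio_file                                   # aio_final_size_mb=400
$rpc bdev_aio_rescan aio_bdev                                    # 51200 -> 102400 blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420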
00:15:42.238 [2024-07-22 12:10:50.024536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.238 [2024-07-22 12:10:50.115332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.494 12:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.494 12:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:42.495 12:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:42.750 Nvme0n1 00:15:42.750 12:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:43.007 [ 00:15:43.007 { 00:15:43.007 "name": "Nvme0n1", 00:15:43.007 "aliases": [ 00:15:43.007 "8d1b734a-6d94-446a-8de2-1f8d03b6cad1" 00:15:43.007 ], 00:15:43.007 "product_name": "NVMe disk", 00:15:43.007 "block_size": 4096, 00:15:43.007 "num_blocks": 38912, 00:15:43.007 "uuid": "8d1b734a-6d94-446a-8de2-1f8d03b6cad1", 00:15:43.007 "assigned_rate_limits": { 00:15:43.007 "rw_ios_per_sec": 0, 00:15:43.007 "rw_mbytes_per_sec": 0, 00:15:43.007 "r_mbytes_per_sec": 0, 00:15:43.007 "w_mbytes_per_sec": 0 00:15:43.007 }, 00:15:43.007 "claimed": false, 00:15:43.007 "zoned": false, 00:15:43.007 "supported_io_types": { 00:15:43.007 "read": true, 00:15:43.007 "write": true, 00:15:43.007 "unmap": true, 00:15:43.007 "flush": true, 00:15:43.007 "reset": true, 00:15:43.007 "nvme_admin": true, 00:15:43.007 "nvme_io": true, 00:15:43.007 "nvme_io_md": false, 00:15:43.007 "write_zeroes": true, 00:15:43.007 "zcopy": false, 00:15:43.007 "get_zone_info": false, 00:15:43.007 "zone_management": false, 00:15:43.007 "zone_append": false, 00:15:43.007 "compare": true, 00:15:43.007 "compare_and_write": true, 00:15:43.007 "abort": true, 00:15:43.007 "seek_hole": false, 00:15:43.007 "seek_data": false, 00:15:43.007 "copy": true, 00:15:43.007 "nvme_iov_md": false 00:15:43.007 }, 00:15:43.007 "memory_domains": [ 00:15:43.007 { 00:15:43.007 "dma_device_id": "system", 00:15:43.007 "dma_device_type": 1 00:15:43.007 } 00:15:43.007 ], 00:15:43.007 "driver_specific": { 00:15:43.007 "nvme": [ 00:15:43.007 { 00:15:43.007 "trid": { 00:15:43.007 "trtype": "TCP", 00:15:43.007 "adrfam": "IPv4", 00:15:43.007 "traddr": "10.0.0.2", 00:15:43.007 "trsvcid": "4420", 00:15:43.007 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:43.007 }, 00:15:43.007 "ctrlr_data": { 00:15:43.007 "cntlid": 1, 00:15:43.007 "vendor_id": "0x8086", 00:15:43.007 "model_number": "SPDK bdev Controller", 00:15:43.007 "serial_number": "SPDK0", 00:15:43.007 "firmware_revision": "24.09", 00:15:43.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:43.007 "oacs": { 00:15:43.007 "security": 0, 00:15:43.007 "format": 0, 00:15:43.007 "firmware": 0, 00:15:43.007 "ns_manage": 0 00:15:43.007 }, 00:15:43.007 "multi_ctrlr": true, 00:15:43.007 "ana_reporting": false 00:15:43.007 }, 00:15:43.007 "vs": { 00:15:43.007 "nvme_version": "1.3" 00:15:43.007 }, 00:15:43.007 "ns_data": { 00:15:43.007 "id": 1, 00:15:43.007 "can_share": true 00:15:43.007 } 00:15:43.007 } 00:15:43.007 ], 00:15:43.007 "mp_policy": "active_passive" 00:15:43.007 } 00:15:43.007 } 00:15:43.007 ] 00:15:43.007 12:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=969017 00:15:43.007 12:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:43.007 12:10:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:43.264 Running I/O for 10 seconds... 00:15:44.196 Latency(us) 00:15:44.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.196 Nvme0n1 : 1.00 14429.00 56.36 0.00 0.00 0.00 0.00 0.00 00:15:44.196 =================================================================================================================== 00:15:44.196 Total : 14429.00 56.36 0.00 0.00 0.00 0.00 0.00 00:15:44.196 00:15:45.131 12:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:45.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.131 Nvme0n1 : 2.00 14586.50 56.98 0.00 0.00 0.00 0.00 0.00 00:15:45.131 =================================================================================================================== 00:15:45.131 Total : 14586.50 56.98 0.00 0.00 0.00 0.00 0.00 00:15:45.131 00:15:45.388 true 00:15:45.388 12:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:45.388 12:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:45.644 12:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:45.644 12:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:45.644 12:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 969017 00:15:46.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.207 Nvme0n1 : 3.00 14677.33 57.33 0.00 0.00 0.00 0.00 0.00 00:15:46.207 =================================================================================================================== 00:15:46.207 Total : 14677.33 57.33 0.00 0.00 0.00 0.00 0.00 00:15:46.207 00:15:47.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.136 Nvme0n1 : 4.00 14739.00 57.57 0.00 0.00 0.00 0.00 0.00 00:15:47.136 =================================================================================================================== 00:15:47.137 Total : 14739.00 57.57 0.00 0.00 0.00 0.00 0.00 00:15:47.137 00:15:48.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.066 Nvme0n1 : 5.00 14852.20 58.02 0.00 0.00 0.00 0.00 0.00 00:15:48.066 =================================================================================================================== 00:15:48.066 Total : 14852.20 58.02 0.00 0.00 0.00 0.00 0.00 00:15:48.066 00:15:49.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:49.439 Nvme0n1 : 6.00 14948.83 58.39 0.00 0.00 0.00 0.00 0.00 00:15:49.439 =================================================================================================================== 
00:15:49.439 Total : 14948.83 58.39 0.00 0.00 0.00 0.00 0.00 00:15:49.439 00:15:50.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.373 Nvme0n1 : 7.00 15063.43 58.84 0.00 0.00 0.00 0.00 0.00 00:15:50.373 =================================================================================================================== 00:15:50.373 Total : 15063.43 58.84 0.00 0.00 0.00 0.00 0.00 00:15:50.373 00:15:51.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.310 Nvme0n1 : 8.00 15085.88 58.93 0.00 0.00 0.00 0.00 0.00 00:15:51.310 =================================================================================================================== 00:15:51.310 Total : 15085.88 58.93 0.00 0.00 0.00 0.00 0.00 00:15:51.310 00:15:52.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.248 Nvme0n1 : 9.00 15154.78 59.20 0.00 0.00 0.00 0.00 0.00 00:15:52.248 =================================================================================================================== 00:15:52.248 Total : 15154.78 59.20 0.00 0.00 0.00 0.00 0.00 00:15:52.248 00:15:53.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.218 Nvme0n1 : 10.00 15189.00 59.33 0.00 0.00 0.00 0.00 0.00 00:15:53.218 =================================================================================================================== 00:15:53.218 Total : 15189.00 59.33 0.00 0.00 0.00 0.00 0.00 00:15:53.218 00:15:53.218 00:15:53.218 Latency(us) 00:15:53.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.218 Nvme0n1 : 10.01 15191.86 59.34 0.00 0.00 8420.99 3179.71 16699.54 00:15:53.218 =================================================================================================================== 00:15:53.218 Total : 15191.86 59.34 0.00 0.00 8420.99 3179.71 16699.54 00:15:53.218 0 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 968884 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 968884 ']' 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 968884 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 968884 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 968884' 00:15:53.218 killing process with pid 968884 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 968884 00:15:53.218 Received shutdown signal, test time was about 10.000000 seconds 00:15:53.218 00:15:53.218 Latency(us) 00:15:53.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.218 
=================================================================================================================== 00:15:53.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.218 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 968884 00:15:53.476 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:53.734 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:53.990 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:53.990 12:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:54.248 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:54.248 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:54.248 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:54.505 [2024-07-22 12:11:02.351436] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:54.505 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:54.763 request: 00:15:54.763 { 00:15:54.763 "uuid": "6b563338-5979-43c7-a110-4335a0f65438", 00:15:54.763 "method": "bdev_lvol_get_lvstores", 00:15:54.763 "req_id": 1 00:15:54.763 } 00:15:54.763 Got JSON-RPC error response 00:15:54.763 response: 00:15:54.763 { 00:15:54.763 "code": -19, 00:15:54.763 "message": "No such device" 00:15:54.763 } 00:15:55.019 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:55.020 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:55.020 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:55.020 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:55.020 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:55.275 aio_bdev 00:15:55.275 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8d1b734a-6d94-446a-8de2-1f8d03b6cad1 00:15:55.275 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=8d1b734a-6d94-446a-8de2-1f8d03b6cad1 00:15:55.275 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:55.275 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:55.275 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:55.275 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:55.275 12:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:55.530 12:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8d1b734a-6d94-446a-8de2-1f8d03b6cad1 -t 2000 00:15:55.786 [ 00:15:55.786 { 00:15:55.786 "name": "8d1b734a-6d94-446a-8de2-1f8d03b6cad1", 00:15:55.786 "aliases": [ 00:15:55.786 "lvs/lvol" 00:15:55.786 ], 00:15:55.786 "product_name": "Logical Volume", 00:15:55.786 "block_size": 4096, 00:15:55.786 "num_blocks": 38912, 00:15:55.786 "uuid": "8d1b734a-6d94-446a-8de2-1f8d03b6cad1", 00:15:55.786 "assigned_rate_limits": { 00:15:55.786 "rw_ios_per_sec": 0, 00:15:55.786 "rw_mbytes_per_sec": 0, 00:15:55.786 "r_mbytes_per_sec": 0, 00:15:55.786 "w_mbytes_per_sec": 0 00:15:55.786 }, 00:15:55.786 "claimed": false, 00:15:55.786 "zoned": false, 00:15:55.786 "supported_io_types": { 00:15:55.786 "read": true, 00:15:55.786 "write": true, 00:15:55.786 "unmap": true, 00:15:55.786 "flush": false, 00:15:55.786 "reset": true, 00:15:55.786 "nvme_admin": false, 00:15:55.786 "nvme_io": false, 00:15:55.786 "nvme_io_md": false, 00:15:55.786 "write_zeroes": true, 00:15:55.786 "zcopy": false, 00:15:55.786 "get_zone_info": false, 00:15:55.786 "zone_management": false, 00:15:55.786 "zone_append": false, 00:15:55.786 "compare": false, 00:15:55.786 "compare_and_write": false, 00:15:55.786 "abort": false, 00:15:55.786 "seek_hole": true, 00:15:55.786 
"seek_data": true, 00:15:55.786 "copy": false, 00:15:55.786 "nvme_iov_md": false 00:15:55.786 }, 00:15:55.786 "driver_specific": { 00:15:55.786 "lvol": { 00:15:55.786 "lvol_store_uuid": "6b563338-5979-43c7-a110-4335a0f65438", 00:15:55.786 "base_bdev": "aio_bdev", 00:15:55.786 "thin_provision": false, 00:15:55.786 "num_allocated_clusters": 38, 00:15:55.786 "snapshot": false, 00:15:55.786 "clone": false, 00:15:55.786 "esnap_clone": false 00:15:55.786 } 00:15:55.786 } 00:15:55.786 } 00:15:55.786 ] 00:15:55.786 12:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:55.786 12:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:55.787 12:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:56.043 12:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:56.043 12:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:56.043 12:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:56.299 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:56.299 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d1b734a-6d94-446a-8de2-1f8d03b6cad1 00:15:56.555 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b563338-5979-43c7-a110-4335a0f65438 00:15:56.812 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:57.069 00:15:57.069 real 0m17.533s 00:15:57.069 user 0m16.778s 00:15:57.069 sys 0m2.056s 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:57.069 ************************************ 00:15:57.069 END TEST lvs_grow_clean 00:15:57.069 ************************************ 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:57.069 ************************************ 00:15:57.069 START TEST lvs_grow_dirty 00:15:57.069 ************************************ 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:57.069 12:11:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:57.326 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:57.327 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:57.583 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0736f16d-96ba-49aa-be82-f6853ffe0347 00:15:57.583 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:15:57.583 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:57.840 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:57.840 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:57.840 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0736f16d-96ba-49aa-be82-f6853ffe0347 lvol 150 00:15:58.096 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=df140ab1-9804-4707-802e-78a37d561a0a 00:15:58.096 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:58.096 12:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:58.354 [2024-07-22 12:11:06.191873] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:58.354 [2024-07-22 12:11:06.191992] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:15:58.354 true 00:15:58.354 12:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:15:58.354 12:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:58.611 12:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:58.611 12:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:58.868 12:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df140ab1-9804-4707-802e-78a37d561a0a 00:15:59.126 12:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:59.383 [2024-07-22 12:11:07.239062] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.383 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=971046 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 971046 /var/tmp/bdevperf.sock 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 971046 ']' 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:59.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.641 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:59.899 [2024-07-22 12:11:07.610818] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
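The I/O side mirrors the clean run: a standalone bdevperf process attaches the TCP subsystem as a local NVMe bdev and drives ten seconds of 4 KiB random writes at queue depth 128. A sketch using the same flags and socket path as the trace, with $spdk standing in for the workspace checkout:

$spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests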
00:15:59.899 [2024-07-22 12:11:07.610910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971046 ] 00:15:59.899 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.899 [2024-07-22 12:11:07.642232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:59.899 [2024-07-22 12:11:07.673961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.899 [2024-07-22 12:11:07.763981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.157 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.157 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:00.157 12:11:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:00.414 Nvme0n1 00:16:00.414 12:11:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:00.672 [ 00:16:00.672 { 00:16:00.672 "name": "Nvme0n1", 00:16:00.672 "aliases": [ 00:16:00.672 "df140ab1-9804-4707-802e-78a37d561a0a" 00:16:00.672 ], 00:16:00.672 "product_name": "NVMe disk", 00:16:00.672 "block_size": 4096, 00:16:00.672 "num_blocks": 38912, 00:16:00.672 "uuid": "df140ab1-9804-4707-802e-78a37d561a0a", 00:16:00.672 "assigned_rate_limits": { 00:16:00.672 "rw_ios_per_sec": 0, 00:16:00.672 "rw_mbytes_per_sec": 0, 00:16:00.672 "r_mbytes_per_sec": 0, 00:16:00.672 "w_mbytes_per_sec": 0 00:16:00.672 }, 00:16:00.672 "claimed": false, 00:16:00.672 "zoned": false, 00:16:00.672 "supported_io_types": { 00:16:00.672 "read": true, 00:16:00.672 "write": true, 00:16:00.672 "unmap": true, 00:16:00.672 "flush": true, 00:16:00.672 "reset": true, 00:16:00.672 "nvme_admin": true, 00:16:00.672 "nvme_io": true, 00:16:00.672 "nvme_io_md": false, 00:16:00.672 "write_zeroes": true, 00:16:00.672 "zcopy": false, 00:16:00.672 "get_zone_info": false, 00:16:00.672 "zone_management": false, 00:16:00.672 "zone_append": false, 00:16:00.672 "compare": true, 00:16:00.672 "compare_and_write": true, 00:16:00.672 "abort": true, 00:16:00.672 "seek_hole": false, 00:16:00.672 "seek_data": false, 00:16:00.673 "copy": true, 00:16:00.673 "nvme_iov_md": false 00:16:00.673 }, 00:16:00.673 "memory_domains": [ 00:16:00.673 { 00:16:00.673 "dma_device_id": "system", 00:16:00.673 "dma_device_type": 1 00:16:00.673 } 00:16:00.673 ], 00:16:00.673 "driver_specific": { 00:16:00.673 "nvme": [ 00:16:00.673 { 00:16:00.673 "trid": { 00:16:00.673 "trtype": "TCP", 00:16:00.673 "adrfam": "IPv4", 00:16:00.673 "traddr": "10.0.0.2", 00:16:00.673 "trsvcid": "4420", 00:16:00.673 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:00.673 }, 00:16:00.673 "ctrlr_data": { 00:16:00.673 "cntlid": 1, 00:16:00.673 "vendor_id": "0x8086", 00:16:00.673 "model_number": "SPDK bdev Controller", 00:16:00.673 "serial_number": "SPDK0", 00:16:00.673 "firmware_revision": "24.09", 00:16:00.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:00.673 "oacs": { 00:16:00.673 "security": 0, 
00:16:00.673 "format": 0, 00:16:00.673 "firmware": 0, 00:16:00.673 "ns_manage": 0 00:16:00.673 }, 00:16:00.673 "multi_ctrlr": true, 00:16:00.673 "ana_reporting": false 00:16:00.673 }, 00:16:00.673 "vs": { 00:16:00.673 "nvme_version": "1.3" 00:16:00.673 }, 00:16:00.673 "ns_data": { 00:16:00.673 "id": 1, 00:16:00.673 "can_share": true 00:16:00.673 } 00:16:00.673 } 00:16:00.673 ], 00:16:00.673 "mp_policy": "active_passive" 00:16:00.673 } 00:16:00.673 } 00:16:00.673 ] 00:16:00.673 12:11:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=971183 00:16:00.673 12:11:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:00.673 12:11:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:00.931 Running I/O for 10 seconds... 00:16:01.866 Latency(us) 00:16:01.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.866 Nvme0n1 : 1.00 14289.00 55.82 0.00 0.00 0.00 0.00 0.00 00:16:01.866 =================================================================================================================== 00:16:01.866 Total : 14289.00 55.82 0.00 0.00 0.00 0.00 0.00 00:16:01.866 00:16:02.801 12:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:02.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.801 Nvme0n1 : 2.00 14732.50 57.55 0.00 0.00 0.00 0.00 0.00 00:16:02.801 =================================================================================================================== 00:16:02.801 Total : 14732.50 57.55 0.00 0.00 0.00 0.00 0.00 00:16:02.801 00:16:03.059 true 00:16:03.059 12:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:03.059 12:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:03.317 12:11:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:03.317 12:11:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:03.317 12:11:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 971183 00:16:03.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.880 Nvme0n1 : 3.00 14712.67 57.47 0.00 0.00 0.00 0.00 0.00 00:16:03.880 =================================================================================================================== 00:16:03.880 Total : 14712.67 57.47 0.00 0.00 0.00 0.00 0.00 00:16:03.880 00:16:04.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.812 Nvme0n1 : 4.00 14863.00 58.06 0.00 0.00 0.00 0.00 0.00 00:16:04.812 =================================================================================================================== 00:16:04.812 Total : 14863.00 58.06 0.00 0.00 0.00 0.00 0.00 00:16:04.812 00:16:05.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.742 Nvme0n1 : 5.00 
14926.00 58.30 0.00 0.00 0.00 0.00 0.00 00:16:05.742 =================================================================================================================== 00:16:05.742 Total : 14926.00 58.30 0.00 0.00 0.00 0.00 0.00 00:16:05.742 00:16:07.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.110 Nvme0n1 : 6.00 14958.17 58.43 0.00 0.00 0.00 0.00 0.00 00:16:07.110 =================================================================================================================== 00:16:07.110 Total : 14958.17 58.43 0.00 0.00 0.00 0.00 0.00 00:16:07.110 00:16:08.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.059 Nvme0n1 : 7.00 15062.43 58.84 0.00 0.00 0.00 0.00 0.00 00:16:08.059 =================================================================================================================== 00:16:08.059 Total : 15062.43 58.84 0.00 0.00 0.00 0.00 0.00 00:16:08.059 00:16:08.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.990 Nvme0n1 : 8.00 15061.00 58.83 0.00 0.00 0.00 0.00 0.00 00:16:08.990 =================================================================================================================== 00:16:08.990 Total : 15061.00 58.83 0.00 0.00 0.00 0.00 0.00 00:16:08.990 00:16:09.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.921 Nvme0n1 : 9.00 15124.00 59.08 0.00 0.00 0.00 0.00 0.00 00:16:09.921 =================================================================================================================== 00:16:09.921 Total : 15124.00 59.08 0.00 0.00 0.00 0.00 0.00 00:16:09.921 00:16:10.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.853 Nvme0n1 : 10.00 15154.80 59.20 0.00 0.00 0.00 0.00 0.00 00:16:10.853 =================================================================================================================== 00:16:10.853 Total : 15154.80 59.20 0.00 0.00 0.00 0.00 0.00 00:16:10.853 00:16:10.853 00:16:10.853 Latency(us) 00:16:10.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.853 Nvme0n1 : 10.01 15158.02 59.21 0.00 0.00 8439.68 2585.03 16117.00 00:16:10.853 =================================================================================================================== 00:16:10.853 Total : 15158.02 59.21 0.00 0.00 8439.68 2585.03 16117.00 00:16:10.853 0 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 971046 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 971046 ']' 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 971046 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 971046 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:10.853 12:11:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 971046' 00:16:10.853 killing process with pid 971046 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 971046 00:16:10.853 Received shutdown signal, test time was about 10.000000 seconds 00:16:10.853 00:16:10.853 Latency(us) 00:16:10.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.853 =================================================================================================================== 00:16:10.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.853 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 971046 00:16:11.111 12:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:11.368 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:11.626 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:11.626 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 968442 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 968442 00:16:11.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 968442 Killed "${NVMF_APP[@]}" "$@" 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=972405 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 972405 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 972405 ']' 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.884 12:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:12.142 [2024-07-22 12:11:19.845523] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:16:12.142 [2024-07-22 12:11:19.845598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.142 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.142 [2024-07-22 12:11:19.885551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:12.142 [2024-07-22 12:11:19.912544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.142 [2024-07-22 12:11:19.996768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.142 [2024-07-22 12:11:19.996825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.142 [2024-07-22 12:11:19.996848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.142 [2024-07-22 12:11:19.996859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.142 [2024-07-22 12:11:19.996870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
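What follows is the recovery half of the dirty test: the first target was killed with SIGKILL mid-run, so when this fresh target re-registers the same backing file the blobstore finds no clean-shutdown record and replays its metadata (the "Performing recovery on blobstore" notices below). A sketch of the verification, with $rpc, $aio_file, and $lvs as placeholders:

$rpc bdev_aio_create "$aio_file" aio_bdev 4096   # triggers bs_recover + blob replay
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99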
00:16:12.142 [2024-07-22 12:11:19.996896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.399 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.399 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:12.399 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.399 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.399 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:12.399 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.399 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:12.657 [2024-07-22 12:11:20.344985] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:12.657 [2024-07-22 12:11:20.345131] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:12.657 [2024-07-22 12:11:20.345189] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:12.657 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:12.657 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev df140ab1-9804-4707-802e-78a37d561a0a 00:16:12.657 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=df140ab1-9804-4707-802e-78a37d561a0a 00:16:12.658 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.658 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:12.658 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.658 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.658 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:12.915 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df140ab1-9804-4707-802e-78a37d561a0a -t 2000 00:16:12.915 [ 00:16:12.915 { 00:16:12.915 "name": "df140ab1-9804-4707-802e-78a37d561a0a", 00:16:12.915 "aliases": [ 00:16:12.915 "lvs/lvol" 00:16:12.915 ], 00:16:12.915 "product_name": "Logical Volume", 00:16:12.915 "block_size": 4096, 00:16:12.915 "num_blocks": 38912, 00:16:12.915 "uuid": "df140ab1-9804-4707-802e-78a37d561a0a", 00:16:12.915 "assigned_rate_limits": { 00:16:12.915 "rw_ios_per_sec": 0, 00:16:12.915 "rw_mbytes_per_sec": 0, 00:16:12.915 "r_mbytes_per_sec": 0, 00:16:12.915 "w_mbytes_per_sec": 0 00:16:12.915 }, 00:16:12.915 "claimed": false, 00:16:12.916 "zoned": false, 00:16:12.916 "supported_io_types": { 00:16:12.916 "read": true, 00:16:12.916 "write": true, 00:16:12.916 "unmap": true, 00:16:12.916 "flush": false, 00:16:12.916 "reset": true, 00:16:12.916 "nvme_admin": false, 00:16:12.916 "nvme_io": false, 00:16:12.916 "nvme_io_md": 
false, 00:16:12.916 "write_zeroes": true, 00:16:12.916 "zcopy": false, 00:16:12.916 "get_zone_info": false, 00:16:12.916 "zone_management": false, 00:16:12.916 "zone_append": false, 00:16:12.916 "compare": false, 00:16:12.916 "compare_and_write": false, 00:16:12.916 "abort": false, 00:16:12.916 "seek_hole": true, 00:16:12.916 "seek_data": true, 00:16:12.916 "copy": false, 00:16:12.916 "nvme_iov_md": false 00:16:12.916 }, 00:16:12.916 "driver_specific": { 00:16:12.916 "lvol": { 00:16:12.916 "lvol_store_uuid": "0736f16d-96ba-49aa-be82-f6853ffe0347", 00:16:12.916 "base_bdev": "aio_bdev", 00:16:12.916 "thin_provision": false, 00:16:12.916 "num_allocated_clusters": 38, 00:16:12.916 "snapshot": false, 00:16:12.916 "clone": false, 00:16:12.916 "esnap_clone": false 00:16:12.916 } 00:16:12.916 } 00:16:12.916 } 00:16:12.916 ] 00:16:13.173 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:13.173 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:13.173 12:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:13.173 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:13.430 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:13.430 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:13.430 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:13.430 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:13.688 [2024-07-22 12:11:21.598034] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:13.946 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:14.204 request: 00:16:14.204 { 00:16:14.204 "uuid": "0736f16d-96ba-49aa-be82-f6853ffe0347", 00:16:14.204 "method": "bdev_lvol_get_lvstores", 00:16:14.204 "req_id": 1 00:16:14.204 } 00:16:14.204 Got JSON-RPC error response 00:16:14.204 response: 00:16:14.204 { 00:16:14.204 "code": -19, 00:16:14.204 "message": "No such device" 00:16:14.204 } 00:16:14.204 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:14.204 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.204 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:14.204 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.204 12:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:14.460 aio_bdev 00:16:14.461 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df140ab1-9804-4707-802e-78a37d561a0a 00:16:14.461 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=df140ab1-9804-4707-802e-78a37d561a0a 00:16:14.461 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:14.461 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:14.461 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:14.461 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:14.461 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:14.717 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df140ab1-9804-4707-802e-78a37d561a0a -t 2000 00:16:14.975 [ 00:16:14.975 { 00:16:14.975 "name": "df140ab1-9804-4707-802e-78a37d561a0a", 00:16:14.975 "aliases": [ 00:16:14.975 "lvs/lvol" 00:16:14.975 ], 00:16:14.975 "product_name": "Logical Volume", 00:16:14.975 "block_size": 4096, 00:16:14.975 "num_blocks": 38912, 00:16:14.975 "uuid": "df140ab1-9804-4707-802e-78a37d561a0a", 00:16:14.975 "assigned_rate_limits": { 00:16:14.975 "rw_ios_per_sec": 0, 00:16:14.975 "rw_mbytes_per_sec": 0, 00:16:14.975 "r_mbytes_per_sec": 0, 00:16:14.975 "w_mbytes_per_sec": 0 00:16:14.975 }, 00:16:14.975 "claimed": false, 00:16:14.975 "zoned": false, 00:16:14.975 "supported_io_types": { 
00:16:14.975 "read": true, 00:16:14.975 "write": true, 00:16:14.975 "unmap": true, 00:16:14.975 "flush": false, 00:16:14.975 "reset": true, 00:16:14.975 "nvme_admin": false, 00:16:14.975 "nvme_io": false, 00:16:14.975 "nvme_io_md": false, 00:16:14.975 "write_zeroes": true, 00:16:14.975 "zcopy": false, 00:16:14.975 "get_zone_info": false, 00:16:14.975 "zone_management": false, 00:16:14.975 "zone_append": false, 00:16:14.975 "compare": false, 00:16:14.975 "compare_and_write": false, 00:16:14.975 "abort": false, 00:16:14.975 "seek_hole": true, 00:16:14.975 "seek_data": true, 00:16:14.975 "copy": false, 00:16:14.975 "nvme_iov_md": false 00:16:14.975 }, 00:16:14.975 "driver_specific": { 00:16:14.975 "lvol": { 00:16:14.975 "lvol_store_uuid": "0736f16d-96ba-49aa-be82-f6853ffe0347", 00:16:14.975 "base_bdev": "aio_bdev", 00:16:14.975 "thin_provision": false, 00:16:14.975 "num_allocated_clusters": 38, 00:16:14.975 "snapshot": false, 00:16:14.975 "clone": false, 00:16:14.975 "esnap_clone": false 00:16:14.975 } 00:16:14.975 } 00:16:14.975 } 00:16:14.975 ] 00:16:14.975 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:14.975 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:14.975 12:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:15.236 12:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:15.236 12:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:15.236 12:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:15.492 12:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:15.492 12:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df140ab1-9804-4707-802e-78a37d561a0a 00:16:15.749 12:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0736f16d-96ba-49aa-be82-f6853ffe0347 00:16:16.006 12:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.263 00:16:16.263 real 0m19.125s 00:16:16.263 user 0m48.337s 00:16:16.263 sys 0m4.817s 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:16.263 ************************************ 00:16:16.263 END TEST lvs_grow_dirty 00:16:16.263 ************************************ 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:16.263 nvmf_trace.0 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.263 rmmod nvme_tcp 00:16:16.263 rmmod nvme_fabrics 00:16:16.263 rmmod nvme_keyring 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 972405 ']' 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 972405 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 972405 ']' 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 972405 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.263 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972405 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972405' 00:16:16.520 killing process with pid 972405 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 972405 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 972405 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.520 12:11:24 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.520 12:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.050 12:11:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:19.050 00:16:19.050 real 0m41.839s 00:16:19.050 user 1m10.835s 00:16:19.050 sys 0m8.608s 00:16:19.050 12:11:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.050 12:11:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:19.050 ************************************ 00:16:19.050 END TEST nvmf_lvs_grow 00:16:19.050 ************************************ 00:16:19.050 12:11:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:19.050 12:11:26 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:19.050 12:11:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:19.050 12:11:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.050 12:11:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.050 ************************************ 00:16:19.050 START TEST nvmf_bdev_io_wait 00:16:19.050 ************************************ 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:19.050 * Looking for test storage... 
00:16:19.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.050 12:11:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:20.954 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:20.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:20.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:20.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:20.955 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:20.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:16:20.955 00:16:20.955 --- 10.0.0.2 ping statistics --- 00:16:20.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.955 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:16:20.955 00:16:20.955 --- 10.0.0.1 ping statistics --- 00:16:20.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.955 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=974905 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 974905 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 974905 ']' 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.955 12:11:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:20.955 [2024-07-22 12:11:28.830239] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:16:20.956 [2024-07-22 12:11:28.830326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.956 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.956 [2024-07-22 12:11:28.875179] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:21.213 [2024-07-22 12:11:28.906689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.213 [2024-07-22 12:11:29.000602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.213 [2024-07-22 12:11:29.000671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.213 [2024-07-22 12:11:29.000696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.213 [2024-07-22 12:11:29.000710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.213 [2024-07-22 12:11:29.000722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.213 [2024-07-22 12:11:29.000801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.213 [2024-07-22 12:11:29.000884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.213 [2024-07-22 12:11:29.000974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.213 [2024-07-22 12:11:29.000976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.213 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.471 [2024-07-22 
12:11:29.149890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.471 Malloc0 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.471 [2024-07-22 12:11:29.211341] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=975048 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=975052 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.471 { 00:16:21.471 "params": { 00:16:21.471 "name": "Nvme$subsystem", 00:16:21.471 "trtype": "$TEST_TRANSPORT", 00:16:21.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.471 "adrfam": "ipv4", 00:16:21.471 "trsvcid": "$NVMF_PORT", 00:16:21.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.471 "hdgst": ${hdgst:-false}, 00:16:21.471 "ddgst": ${ddgst:-false} 00:16:21.471 }, 00:16:21.471 "method": "bdev_nvme_attach_controller" 
00:16:21.471 } 00:16:21.471 EOF 00:16:21.471 )") 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=975054 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.471 { 00:16:21.471 "params": { 00:16:21.471 "name": "Nvme$subsystem", 00:16:21.471 "trtype": "$TEST_TRANSPORT", 00:16:21.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.471 "adrfam": "ipv4", 00:16:21.471 "trsvcid": "$NVMF_PORT", 00:16:21.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.471 "hdgst": ${hdgst:-false}, 00:16:21.471 "ddgst": ${ddgst:-false} 00:16:21.471 }, 00:16:21.471 "method": "bdev_nvme_attach_controller" 00:16:21.471 } 00:16:21.471 EOF 00:16:21.471 )") 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=975057 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.471 { 00:16:21.471 "params": { 00:16:21.471 "name": "Nvme$subsystem", 00:16:21.471 "trtype": "$TEST_TRANSPORT", 00:16:21.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.471 "adrfam": "ipv4", 00:16:21.471 "trsvcid": "$NVMF_PORT", 00:16:21.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.471 "hdgst": ${hdgst:-false}, 00:16:21.471 "ddgst": ${ddgst:-false} 00:16:21.471 }, 00:16:21.471 "method": "bdev_nvme_attach_controller" 00:16:21.471 } 00:16:21.471 EOF 00:16:21.471 )") 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 
00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.471 { 00:16:21.471 "params": { 00:16:21.471 "name": "Nvme$subsystem", 00:16:21.471 "trtype": "$TEST_TRANSPORT", 00:16:21.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.471 "adrfam": "ipv4", 00:16:21.471 "trsvcid": "$NVMF_PORT", 00:16:21.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.471 "hdgst": ${hdgst:-false}, 00:16:21.471 "ddgst": ${ddgst:-false} 00:16:21.471 }, 00:16:21.471 "method": "bdev_nvme_attach_controller" 00:16:21.471 } 00:16:21.471 EOF 00:16:21.471 )") 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 975048 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:21.471 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.472 "params": { 00:16:21.472 "name": "Nvme1", 00:16:21.472 "trtype": "tcp", 00:16:21.472 "traddr": "10.0.0.2", 00:16:21.472 "adrfam": "ipv4", 00:16:21.472 "trsvcid": "4420", 00:16:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.472 "hdgst": false, 00:16:21.472 "ddgst": false 00:16:21.472 }, 00:16:21.472 "method": "bdev_nvme_attach_controller" 00:16:21.472 }' 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.472 "params": { 00:16:21.472 "name": "Nvme1", 00:16:21.472 "trtype": "tcp", 00:16:21.472 "traddr": "10.0.0.2", 00:16:21.472 "adrfam": "ipv4", 00:16:21.472 "trsvcid": "4420", 00:16:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.472 "hdgst": false, 00:16:21.472 "ddgst": false 00:16:21.472 }, 00:16:21.472 "method": "bdev_nvme_attach_controller" 00:16:21.472 }' 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.472 "params": { 00:16:21.472 "name": "Nvme1", 00:16:21.472 "trtype": "tcp", 00:16:21.472 "traddr": "10.0.0.2", 00:16:21.472 "adrfam": "ipv4", 00:16:21.472 "trsvcid": "4420", 00:16:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.472 "hdgst": false, 00:16:21.472 "ddgst": false 00:16:21.472 }, 00:16:21.472 "method": "bdev_nvme_attach_controller" 00:16:21.472 }' 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:21.472 12:11:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.472 "params": { 00:16:21.472 "name": "Nvme1", 00:16:21.472 "trtype": "tcp", 00:16:21.472 "traddr": "10.0.0.2", 00:16:21.472 "adrfam": "ipv4", 00:16:21.472 "trsvcid": "4420", 00:16:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.472 "hdgst": false, 00:16:21.472 "ddgst": false 00:16:21.472 }, 00:16:21.472 "method": "bdev_nvme_attach_controller" 00:16:21.472 }' [2024-07-22 12:11:29.258228] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:16:21.472 [2024-07-22 12:11:29.258228] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:16:21.472 [2024-07-22 12:11:29.258228] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:16:21.472 [2024-07-22 12:11:29.258321] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:21.472 [2024-07-22 12:11:29.258322] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:21.472 [2024-07-22 12:11:29.258322] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:21.472 [2024-07-22 12:11:29.260043] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:16:21.472 [2024-07-22 12:11:29.260112] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:21.472 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.729 [2024-07-22 12:11:29.406939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:21.729 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.729 [2024-07-22 12:11:29.435208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.729 [2024-07-22 12:11:29.505280] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:21.729 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.729 [2024-07-22 12:11:29.510025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.729 [2024-07-22 12:11:29.535103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.729 [2024-07-22 12:11:29.604787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:21.729 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.730 [2024-07-22 12:11:29.609810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:21.730 [2024-07-22 12:11:29.635152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.987 [2024-07-22 12:11:29.673014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:21.987 [2024-07-22 12:11:29.703001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.987 [2024-07-22 12:11:29.705304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:21.987 [2024-07-22 12:11:29.770572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:21.987 Running I/O for 1 seconds... 00:16:21.987 Running I/O for 1 seconds... 00:16:21.987 Running I/O for 1 seconds... 00:16:22.245 Running I/O for 1 seconds... 
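The four Running I/O jobs above come from four bdevperf instances pinned to core masks 0x10/0x20/0x40/0x80 (write, read, flush, unmap), each fed its bdev_nvme_attach_controller config as JSON over /dev/fd/63, i.e. process substitution on the printf output traced at nvmf/common.sh@558. A sketch of the launch shape, reduced to the write instance, with $rootdir standing in for the Jenkins spdk checkout path and arguments as traced above:

    "$rootdir/build/examples/bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &   # config arrives as /dev/fd/63
    WRITE_PID=$!                                # later: wait $WRITE_PID collects the per-job latency table below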
00:16:23.177 00:16:23.177 Latency(us) 00:16:23.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.177 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:23.177 Nvme1n1 : 1.02 5812.56 22.71 0.00 0.00 21846.09 8932.31 35923.44 00:16:23.177 =================================================================================================================== 00:16:23.177 Total : 5812.56 22.71 0.00 0.00 21846.09 8932.31 35923.44 00:16:23.177 00:16:23.177 Latency(us) 00:16:23.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.177 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:23.177 Nvme1n1 : 1.01 9342.62 36.49 0.00 0.00 13639.75 7621.59 27185.30 00:16:23.177 =================================================================================================================== 00:16:23.177 Total : 9342.62 36.49 0.00 0.00 13639.75 7621.59 27185.30 00:16:23.177 00:16:23.177 Latency(us) 00:16:23.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.177 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:23.177 Nvme1n1 : 1.01 5685.40 22.21 0.00 0.00 22433.55 6602.15 46215.02 00:16:23.177 =================================================================================================================== 00:16:23.177 Total : 5685.40 22.21 0.00 0.00 22433.55 6602.15 46215.02 00:16:23.177 00:16:23.177 Latency(us) 00:16:23.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.177 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:23.177 Nvme1n1 : 1.00 167326.94 653.62 0.00 0.00 762.02 276.10 995.18 00:16:23.177 =================================================================================================================== 00:16:23.177 Total : 167326.94 653.62 0.00 0.00 762.02 276.10 995.18 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 975052 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 975054 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 975057 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.448 rmmod nvme_tcp 00:16:23.448 rmmod nvme_fabrics 00:16:23.448 rmmod nvme_keyring 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 974905 ']' 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 974905 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 974905 ']' 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 974905 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 974905 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 974905' 00:16:23.448 killing process with pid 974905 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 974905 00:16:23.448 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 974905 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.723 12:11:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.248 12:11:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.248 00:16:26.248 real 0m7.106s 00:16:26.248 user 0m15.904s 00:16:26.248 sys 0m3.454s 00:16:26.248 12:11:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.248 12:11:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:26.248 ************************************ 00:16:26.248 END TEST nvmf_bdev_io_wait 00:16:26.248 ************************************ 00:16:26.248 12:11:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:26.248 12:11:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:26.248 12:11:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.248 12:11:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.248 12:11:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.248 ************************************ 00:16:26.248 START TEST nvmf_queue_depth 00:16:26.248 ************************************ 00:16:26.248 
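The queue_depth prologue that follows repeats the nvmftestinit plumbing traced earlier in this log: with two e810 ports found, the first (cvl_0_0) is moved into a private network namespace to play the target on 10.0.0.2 while the second (cvl_0_1) stays in the root namespace as the initiator on 10.0.0.1. Condensed from the nvmf_tcp_init trace above (device names from this run):

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # sanity check from the root namespace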
12:11:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:26.248 * Looking for test storage... 00:16:26.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.248 12:11:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.249 12:11:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:28.148 
12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.148 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:28.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:28.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:16:28.149 00:16:28.149 --- 10.0.0.2 ping statistics --- 00:16:28.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.149 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:16:28.149 00:16:28.149 --- 10.0.0.1 ping statistics --- 00:16:28.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.149 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=977272 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 977272 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 977272 ']' 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.149 12:11:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.149 [2024-07-22 12:11:36.003401] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
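The nvmf_tcp_init sequence traced above wires the two detected E810 ports into a split topology: the target-side port is moved into a private network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk across the link rather than being short-circuited through the local stack. Condensed into a sketch, with interface and namespace names exactly as in this run:

  # flush any stale addressing, then isolate the target-side port
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends and bring the links up
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic on port 4420, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1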
00:16:28.149 [2024-07-22 12:11:36.003471] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.149 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.149 [2024-07-22 12:11:36.041098] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:28.149 [2024-07-22 12:11:36.066398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.406 [2024-07-22 12:11:36.151501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.406 [2024-07-22 12:11:36.151552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.406 [2024-07-22 12:11:36.151577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.406 [2024-07-22 12:11:36.151588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.406 [2024-07-22 12:11:36.151599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.406 [2024-07-22 12:11:36.151629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.406 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.406 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:28.406 12:11:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.406 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.406 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.406 12:11:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.406 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.407 [2024-07-22 12:11:36.293825] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.407 Malloc0 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.407 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.664 [2024-07-22 12:11:36.355684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=977298 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 977298 /var/tmp/bdevperf.sock 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 977298 ']' 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.664 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:28.664 [2024-07-22 12:11:36.403262] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:16:28.664 [2024-07-22 12:11:36.403325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid977298 ] 00:16:28.664 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.664 [2024-07-22 12:11:36.435194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:28.664 [2024-07-22 12:11:36.465251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:28.664 [2024-07-22 12:11:36.555874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:28.922 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:28.922 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:16:28.922 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:28.922 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.922 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:16:28.922 NVMe0n1
00:16:28.922 12:11:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.922 12:11:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:29.180 Running I/O for 10 seconds...
00:16:39.146
00:16:39.146                                                                                                Latency(us)
00:16:39.146 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:39.146 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:39.146 	 Verification LBA range: start 0x0 length 0x4000
00:16:39.146 	 NVMe0n1                             :      10.09    8617.64      33.66       0.00     0.00  118332.06   25049.32   76895.57
00:16:39.146 ===================================================================================================================
00:16:39.146 Total                                  :               8617.64      33.66       0.00     0.00  118332.06   25049.32   76895.57
00:16:39.146 0
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 977298
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 977298 ']'
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 977298
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 977298
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 977298'
00:16:39.146 killing process with pid 977298
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 977298
00:16:39.146 Received shutdown signal, test time was about 10.000000 seconds
00:16:39.146
00:16:39.146                                                                                                Latency(us)
00:16:39.146 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:39.146 ===================================================================================================================
00:16:39.146 Total                                  :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:16:39.146 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 977298
00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.403 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.403 rmmod nvme_tcp 00:16:39.403 rmmod nvme_fabrics 00:16:39.403 rmmod nvme_keyring 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 977272 ']' 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 977272 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 977272 ']' 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 977272 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 977272 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:39.659 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 977272' 00:16:39.659 killing process with pid 977272 00:16:39.660 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 977272 00:16:39.660 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 977272 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.918 12:11:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.825 12:11:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.825 00:16:41.825 real 0m15.994s 00:16:41.825 user 0m22.480s 00:16:41.825 sys 0m3.073s 00:16:41.825 12:11:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.825 12:11:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:41.825 ************************************ 00:16:41.825 END TEST nvmf_queue_depth 00:16:41.825 ************************************ 
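Stripped of the xtrace noise, the queue_depth test above reduces to a short target/initiator sequence; a condensed sketch, where rpc.py stands for the spdk/scripts/rpc.py wrapper that rpc_cmd invokes and the target-side commands run inside the cvl_0_0_ns_spdk namespace:

  # target side: TCP transport, a 64 MB malloc-backed bdev, and a subsystem exporting it
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf idles in -z mode on its own RPC socket until driven
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests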
00:16:41.825 12:11:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:41.825 12:11:49 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:41.825 12:11:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.825 12:11:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.825 12:11:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.825 ************************************ 00:16:41.825 START TEST nvmf_target_multipath 00:16:41.825 ************************************ 00:16:41.825 12:11:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:42.083 * Looking for test storage... 00:16:42.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.083 12:11:49 
nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.083 12:11:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:43.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.981 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:43.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.982 12:11:51 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:43.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:43.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 
-- # ip -4 addr flush cvl_0_0 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:16:43.982 00:16:43.982 --- 10.0.0.2 ping statistics --- 00:16:43.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.982 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:16:43.982 00:16:43.982 --- 10.0.0.1 ping statistics --- 00:16:43.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.982 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:43.982 only one NIC for nvmf test 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.982 rmmod nvme_tcp 00:16:43.982 rmmod nvme_fabrics 00:16:43.982 rmmod nvme_keyring 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.982 12:11:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.514 12:11:53 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.514 00:16:46.514 real 0m4.168s 00:16:46.514 user 0m0.776s 00:16:46.514 sys 0m1.372s 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.514 12:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:46.514 ************************************ 00:16:46.514 END TEST nvmf_target_multipath 00:16:46.514 ************************************ 00:16:46.514 12:11:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:46.514 12:11:53 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:46.514 12:11:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:46.514 12:11:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.514 12:11:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.514 ************************************ 00:16:46.514 START TEST nvmf_zcopy 00:16:46.514 ************************************ 00:16:46.514 12:11:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:46.514 * Looking for test storage... 00:16:46.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.514 12:11:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.514 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:46.514 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.514 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.514 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.514 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.515 12:11:53 
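
The host identity sourced above is minted rather than hard-coded: nvme-cli generates a uuid-based NQN, and the host ID in the trace is exactly the uuid portion of that NQN. One way to derive it, matching the values shown (the parameter expansion is an assumption; the trace only records the results):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the uuid after the last ':'
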
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.515 
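
Note how paths/export.sh prepends the same go/protoc/golangci directories on every nested source, so the PATH above carries several copies of each entry. Harmless for lookups, but if it ever needed collapsing, a dedup along these lines (illustrative only, not part of the harness) keeps first occurrences in order:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}    # trim the trailing ':' left by awk's output separator
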
12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.515 12:11:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
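
The arrays being filled here act as a PCI-ID classifier: e810 and x722 collect Intel device IDs, mlx the Mellanox ones, so the harness only adopts NICs it knows how to drive. The sysfs walk traced just below then resolves each matching function to its kernel netdev; condensed, assuming pciutils is installed (the E810 functions on this box report 0x8086:0x159b):

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do           # netdev bound to this function
            [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
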
00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:48.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.431 
12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.431 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.432 12:11:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:48.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:16:48.432 00:16:48.432 --- 10.0.0.2 ping statistics --- 00:16:48.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.432 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:16:48.432 00:16:48.432 --- 10.0.0.1 ping statistics --- 00:16:48.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.432 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=982338 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 982338 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 982338 ']' 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.432 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.432 [2024-07-22 12:11:56.102577] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
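
The flush/netns/addressing/ping block above is nvmf_tcp_init rebuilding the same loopback topology the multipath test used: one E810 port is hidden in a network namespace to play the target, the other stays in the root namespace as the initiator. A minimal sketch, assuming ports named cvl_0_0/cvl_0_1 that reach each other on the wire:

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # hide one port inside it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back
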
00:16:48.432 [2024-07-22 12:11:56.102685] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.432 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.432 [2024-07-22 12:11:56.146668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:48.432 [2024-07-22 12:11:56.174267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.432 [2024-07-22 12:11:56.261593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.432 [2024-07-22 12:11:56.261673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.432 [2024-07-22 12:11:56.261702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.432 [2024-07-22 12:11:56.261714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.432 [2024-07-22 12:11:56.261724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.432 [2024-07-22 12:11:56.261751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.711 [2024-07-22 12:11:56.404196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.711 [2024-07-22 12:11:56.420381] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.711 12:11:56 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.711 malloc0 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:48.711 { 00:16:48.711 "params": { 00:16:48.711 "name": "Nvme$subsystem", 00:16:48.711 "trtype": "$TEST_TRANSPORT", 00:16:48.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:48.711 "adrfam": "ipv4", 00:16:48.711 "trsvcid": "$NVMF_PORT", 00:16:48.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:48.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:48.711 "hdgst": ${hdgst:-false}, 00:16:48.711 "ddgst": ${ddgst:-false} 00:16:48.711 }, 00:16:48.711 "method": "bdev_nvme_attach_controller" 00:16:48.711 } 00:16:48.711 EOF 00:16:48.711 )") 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:48.711 12:11:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:48.711 "params": { 00:16:48.711 "name": "Nvme1", 00:16:48.711 "trtype": "tcp", 00:16:48.711 "traddr": "10.0.0.2", 00:16:48.711 "adrfam": "ipv4", 00:16:48.711 "trsvcid": "4420", 00:16:48.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.711 "hdgst": false, 00:16:48.711 "ddgst": false 00:16:48.711 }, 00:16:48.711 "method": "bdev_nvme_attach_controller" 00:16:48.711 }' 00:16:48.711 [2024-07-22 12:11:56.500419] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
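
At this point everything bdevperf needs exists: nvmf_tgt was started inside the namespace, its RPC socket polled, and the subsystem assembled with a handful of RPCs. The same bring-up written out directly (a sketch; flags are copied from the trace, and scripts/rpc.py on the default /var/tmp/spdk.sock socket is assumed as the client):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    until scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do sleep 0.5; done
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # the zero-copy path under test
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB ram disk, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
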
00:16:48.712 [2024-07-22 12:11:56.500498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982478 ] 00:16:48.712 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.712 [2024-07-22 12:11:56.532834] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:48.712 [2024-07-22 12:11:56.565204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.979 [2024-07-22 12:11:56.660803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.236 Running I/O for 10 seconds... 00:16:59.195 00:16:59.195 Latency(us) 00:16:59.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.195 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:59.195 Verification LBA range: start 0x0 length 0x1000 00:16:59.195 Nvme1n1 : 10.01 5826.25 45.52 0.00 0.00 21911.13 4150.61 29903.83 00:16:59.195 =================================================================================================================== 00:16:59.195 Total : 5826.25 45.52 0.00 0.00 21911.13 4150.61 29903.83 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=983789 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.453 { 00:16:59.453 "params": { 00:16:59.453 "name": "Nvme$subsystem", 00:16:59.453 "trtype": "$TEST_TRANSPORT", 00:16:59.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.453 "adrfam": "ipv4", 00:16:59.453 "trsvcid": "$NVMF_PORT", 00:16:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.453 "hdgst": ${hdgst:-false}, 00:16:59.453 "ddgst": ${ddgst:-false} 00:16:59.453 }, 00:16:59.453 "method": "bdev_nvme_attach_controller" 00:16:59.453 } 00:16:59.453 EOF 00:16:59.453 )") 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:59.453 [2024-07-22 12:12:07.240767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.240813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
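
gen_nvmf_target_json above never touches disk: the config assembled through cat/jq/printf is emitted on a pipe and handed to bdevperf as a /dev/fd path, which is exactly what bash process substitution produces. The first run's invocation reduces to a sketch like this (the second run only swaps the workload to randrw -M 50, a 50/50 read-write mix, and shortens -t to 5):

    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
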
00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:59.453 12:12:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.453 "params": { 00:16:59.453 "name": "Nvme1", 00:16:59.453 "trtype": "tcp", 00:16:59.453 "traddr": "10.0.0.2", 00:16:59.453 "adrfam": "ipv4", 00:16:59.453 "trsvcid": "4420", 00:16:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.453 "hdgst": false, 00:16:59.453 "ddgst": false 00:16:59.453 }, 00:16:59.453 "method": "bdev_nvme_attach_controller" 00:16:59.453 }' 00:16:59.453 [2024-07-22 12:12:07.248706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.248730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.256711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.256735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.264713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.264734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.272735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.272755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.280228] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:16:59.453 [2024-07-22 12:12:07.280300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983789 ] 00:16:59.453 [2024-07-22 12:12:07.280764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.280785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.288785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.288806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.296804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.296825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.304826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.304846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.453 [2024-07-22 12:12:07.312848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.312870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.314657] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
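
The error pairs that flood the log from here on are the point of the test, not a failure: while the second bdevperf run drives I/O, the script keeps asking the target to add a namespace under an NSID that is already taken, so each attempt pauses the subsystem, fails in spdk_nvmf_subsystem_add_ns_ext, and resumes it. Roughly this pattern (a sketch; the exact loop lives in target/zcopy.sh):

    while kill -0 "$perfpid" 2> /dev/null; do    # as long as bdevperf (pid 983789) is alive
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
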
00:16:59.453 [2024-07-22 12:12:07.320870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.320907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.328912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.328939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.336928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.336952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.344954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.344978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.346174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.453 [2024-07-22 12:12:07.353016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.353049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.361041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.361079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.369019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.369044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.453 [2024-07-22 12:12:07.377041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.453 [2024-07-22 12:12:07.377064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.711 [2024-07-22 12:12:07.385086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.711 [2024-07-22 12:12:07.385117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.711 [2024-07-22 12:12:07.393103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.711 [2024-07-22 12:12:07.393133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.711 [2024-07-22 12:12:07.401156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.711 [2024-07-22 12:12:07.401194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.711 [2024-07-22 12:12:07.409156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.711 [2024-07-22 12:12:07.409188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.711 [2024-07-22 12:12:07.417163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.711 [2024-07-22 12:12:07.417197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.711 [2024-07-22 12:12:07.425183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.711 [2024-07-22 12:12:07.425208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.711 [2024-07-22 12:12:07.433204] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.433229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.440604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.712 [2024-07-22 12:12:07.441225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.441249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.449247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.449272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.457297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.457333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.465326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.465367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.473358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.473417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.481375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.481417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.489397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.489450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.497419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.497461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.505424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.505458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.513443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.513472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.521484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.521525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.529512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.529556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.537494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.537518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.545515] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.545540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.553709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.553741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.561640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.561684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.569680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.569705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.577698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.577724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.585717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.585741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.593739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.593764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.601758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.601781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.609758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.609780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.617782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.617803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.625805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.625831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.633847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.633882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.712 [2024-07-22 12:12:07.641865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.712 [2024-07-22 12:12:07.641917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.649875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.649914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.657909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.657931] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.665931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.665952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.673968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.673995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.682003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.682033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.690020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.690046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.698045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.698071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.706065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.706089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.714099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.714123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.722119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.722145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.730136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.730161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.738187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.738217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.746207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.746234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 Running I/O for 5 seconds... 
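
"Running I/O for 5 seconds..." marks the start of the timed randrw phase, and the add_ns attempts keep failing underneath it for the whole window, each one contributing exactly one subsystem.c line and one nvmf_rpc.c line. That symmetry makes the run easy to sanity-check afterwards (illustrative; build.log is a stand-in for wherever this console output is saved):

    grep -c 'Requested NSID 1 already in use' build.log    # number of add_ns attempts
    grep -c 'Unable to add namespace' build.log            # should match, one per attempt
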
00:16:59.971 [2024-07-22 12:12:07.758702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.758731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.770365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.770393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.782385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.782412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.795138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.795187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.807754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.807782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.820931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.820964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.833503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.833534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.846252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.846280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.858658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.858685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.871416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.871443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.884291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.884318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.971 [2024-07-22 12:12:07.897316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.971 [2024-07-22 12:12:07.897361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.232 [2024-07-22 12:12:07.910410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.232 [2024-07-22 12:12:07.910437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.232 [2024-07-22 12:12:07.922905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.232 [2024-07-22 12:12:07.922932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.232 [2024-07-22 12:12:07.935437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.232 
[2024-07-22 12:12:07.935464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:00.232 [2024-07-22 12:12:07.947969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:00.232 [2024-07-22 12:12:07.947996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair ("Requested NSID 1 already in use" from subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc.c:1553:nvmf_rpc_ns_paused) repeats at roughly 12-13 ms intervals from 12:12:07.960 through 12:12:11.753, elapsed time 00:17:00.232 through 00:17:03.851 ...]
00:17:03.851 [2024-07-22 12:12:11.765713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:03.851 [2024-07-22 12:12:11.765741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:03.851 [2024-07-22 12:12:11.778265]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:03.851 [2024-07-22 12:12:11.778294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.790819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.790848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.803667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.803706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.815921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.815949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.827947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.827975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.840039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.840066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.852118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.852146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.864758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.864785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.877299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.877326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.890297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.890326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.903993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.904020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.917116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.917144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.930267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.930294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.942867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.942896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.955688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.955716] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.967801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.967844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.980158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.980186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:11.992383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:11.992410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:12.004958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:12.004985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:12.017552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:12.017579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.108 [2024-07-22 12:12:12.029279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.108 [2024-07-22 12:12:12.029307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.041589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.041624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.054422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.054450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.067829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.067857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.079201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.079228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.092534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.092561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.105084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.105113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.117879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.117908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.130864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.130894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.143798] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.143828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.156093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.156147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.168113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.168140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.180329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.180357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.364 [2024-07-22 12:12:12.193229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.364 [2024-07-22 12:12:12.193256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.205366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.205394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.217649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.217677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.230393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.230421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.243120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.243148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.255567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.255595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.268042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.268070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.281196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.281225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.365 [2024-07-22 12:12:12.294462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.365 [2024-07-22 12:12:12.294491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.306893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.306936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.319118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.319146] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.331835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.331863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.344650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.344678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.357536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.357563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.369422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.369451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.381772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.381800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.394001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.394029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.406046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.406088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.418233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.418261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.430488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.430515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.443247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.443275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.455875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.455904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.468290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.468317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.480626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.480654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.493095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.493123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.506048] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.506074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.518330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.518371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.530663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.530689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.621 [2024-07-22 12:12:12.543105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.621 [2024-07-22 12:12:12.543132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.555335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.555362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.567841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.567869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.580422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.580449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.593025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.593053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.605390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.605417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.617501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.617529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.634304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.634331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.646038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.646079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.659339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.659365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.672081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.672109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.877 [2024-07-22 12:12:12.684496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.877 [2024-07-22 12:12:12.684523] 
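This flood of identical failures is expected behavior, not a defect: the test keeps trying to attach a bdev at an NSID that is already occupied, and each RPC pauses the subsystem, fails because NSID 1 is taken, and logs the pair of errors seen above. A hedged reproduction of that pattern (the rpc.py path and subsystem NQN appear elsewhere in this trace; the bdev name and loop bounds are illustrative assumptions, since the body of zcopy.sh is not shown here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for _ in $(seq 1 10); do
      # NSID 1 is already occupied, so each call should fail on the target
      # side with "Requested NSID 1 already in use".
      "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0 -n 1 || true
  done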
00:17:05.134
00:17:05.134 Latency(us)
00:17:05.134 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:17:05.134 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:05.134 	 Nvme1n1                  :       5.05   10022.97      78.30       0.00       0.00   12650.34    5606.97   52428.80
00:17:05.134 ===================================================================================================================
00:17:05.134 	 Total                    :              10022.97      78.30       0.00       0.00   12650.34    5606.97   52428.80
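As a quick sanity check on the summary above, throughput follows directly from the job parameters: 10022.97 IOPS at an IO size of 8192 bytes is 10022.97 * 8192 / 1048576, or about 78.30 MiB/s, matching the MiB/s column. A minimal sketch of that check in shell (values copied from the table; awk assumed available):

  # Verify that the reported MiB/s equals IOPS * IO size.
  iops=10022.97
  io_size=8192   # bytes per IO, from "IO size: 8192" in the job line
  awk -v iops="$iops" -v sz="$io_size" \
      'BEGIN { printf "throughput = %.2f MiB/s\n", iops * sz / (1024 * 1024) }'
  # prints: throughput = 78.30 MiB/s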
(the same error pair returns at roughly 8 ms intervals from [2024-07-22 12:12:12.818368] through [2024-07-22 12:12:13.035005] while the I/O job drains; about 27 duplicate occurrences omitted)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (983789) - No such process 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 983789 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.134 delay0 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.134 12:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:05.391 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.391 [2024-07-22 12:12:13.156694] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:11.950 Initializing NVMe Controllers 00:17:11.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:11.950 Initialization complete. Launching workers. 00:17:11.950 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 849 00:17:11.950 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1136, failed to submit 33 00:17:11.950 success 963, unsuccess 173, failed 0 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.950 rmmod nvme_tcp 00:17:11.950 rmmod nvme_fabrics 00:17:11.950 rmmod nvme_keyring 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 982338 ']' 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 982338 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 982338 ']' 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 982338 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 982338 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 982338' 00:17:11.950 killing process with pid 982338 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 982338 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 982338 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.950 12:12:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.550 12:12:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.550 00:17:14.550 real 0m27.947s 00:17:14.550 user 0m41.420s 00:17:14.550 sys 0m8.271s 00:17:14.550 12:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.550 12:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:14.550 ************************************ 00:17:14.550 END TEST nvmf_zcopy 00:17:14.550 ************************************ 00:17:14.550 12:12:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:14.550 12:12:21 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:14.550 12:12:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:14.550 12:12:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.550 12:12:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.550 ************************************ 00:17:14.550 START TEST nvmf_nmic 00:17:14.550 ************************************ 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:14.550 * Looking for test storage... 
00:17:14.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.550 12:12:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.551 12:12:21 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.551 12:12:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.447 
12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.447 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.448 12:12:23 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.448 12:12:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:17:16.448 00:17:16.448 --- 10.0.0.2 ping statistics --- 00:17:16.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.448 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:16.448 00:17:16.448 --- 10.0.0.1 ping statistics --- 00:17:16.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.448 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=987672 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 987672 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 987672 ']' 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.448 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.448 [2024-07-22 12:12:24.149800] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:17:16.448 [2024-07-22 12:12:24.149879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.448 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.448 [2024-07-22 12:12:24.188112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:16.448 [2024-07-22 12:12:24.220409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.448 [2024-07-22 12:12:24.315588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
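Condensed from the ip/iptables trace above, this is the topology the test runs on: the two E810 ports talk to each other over real hardware, with the target port isolated in its own network namespace. A sketch only, with addresses and interface names exactly as printed in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                     # sanity-check the path in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1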
00:17:16.448 [2024-07-22 12:12:24.315661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.448 [2024-07-22 12:12:24.315686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.448 [2024-07-22 12:12:24.315700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.448 [2024-07-22 12:12:24.315712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.448 [2024-07-22 12:12:24.315794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.448 [2024-07-22 12:12:24.315849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.448 [2024-07-22 12:12:24.315902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.449 [2024-07-22 12:12:24.315905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 [2024-07-22 12:12:24.481503] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 Malloc0 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 [2024-07-22 12:12:24.532791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:16.707 test case1: single bdev can't be used in multiple subsystems 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 [2024-07-22 12:12:24.556589] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:16.707 [2024-07-22 12:12:24.556642] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:16.707 [2024-07-22 12:12:24.556674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.707 request: 00:17:16.707 { 00:17:16.707 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:16.707 "namespace": { 00:17:16.707 "bdev_name": "Malloc0", 00:17:16.707 "no_auto_visible": false 00:17:16.707 }, 00:17:16.707 "method": "nvmf_subsystem_add_ns", 00:17:16.707 "req_id": 1 00:17:16.707 } 00:17:16.707 Got JSON-RPC error response 00:17:16.707 response: 00:17:16.707 { 00:17:16.707 "code": -32602, 00:17:16.707 "message": "Invalid parameters" 00:17:16.707 } 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:16.707 Adding namespace failed - expected result. 
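The JSON-RPC failure above is the point of test case1: nvmf_subsystem_add_ns opens the bdev with an exclusive_write claim on behalf of the first subsystem, so a second subsystem cannot claim the same bdev. A hedged rpc.py rendering of the sequence, with arguments as they appear in the rpc_cmd trace (rpc_cmd forwards to this script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # first claim succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0     # fails: bdev already claimed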
00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:16.707 test case2: host connect to nvmf target in multiple paths 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 [2024-07-22 12:12:24.564724] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.707 12:12:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.271 12:12:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:18.202 12:12:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.202 12:12:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:18.202 12:12:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.202 12:12:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:18.202 12:12:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:20.098 12:12:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:20.098 12:12:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:20.098 12:12:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.098 12:12:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:20.098 12:12:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.098 12:12:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:20.098 12:12:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:20.098 [global] 00:17:20.098 thread=1 00:17:20.098 invalidate=1 00:17:20.098 rw=write 00:17:20.098 time_based=1 00:17:20.098 runtime=1 00:17:20.098 ioengine=libaio 00:17:20.098 direct=1 00:17:20.098 bs=4096 00:17:20.098 iodepth=1 00:17:20.098 norandommap=0 00:17:20.098 numjobs=1 00:17:20.098 00:17:20.098 verify_dump=1 00:17:20.098 verify_backlog=512 00:17:20.098 verify_state_save=0 00:17:20.098 do_verify=1 00:17:20.098 verify=crc32c-intel 00:17:20.098 [job0] 00:17:20.098 filename=/dev/nvme0n1 00:17:20.098 Could not set queue depth (nvme0n1) 00:17:20.355 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.355 fio-3.35 00:17:20.355 Starting 1 thread 00:17:21.287 00:17:21.287 job0: (groupid=0, jobs=1): err= 0: pid=988188: Mon Jul 22 12:12:29 2024 00:17:21.287 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:17:21.287 slat (nsec): min=15312, max=19491, avg=16050.38, stdev=882.46 00:17:21.287 
clat (usec): min=40541, max=42077, avg=41247.51, stdev=499.99 00:17:21.287 lat (usec): min=40561, max=42093, avg=41263.56, stdev=499.75 00:17:21.287 clat percentiles (usec): 00:17:21.287 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:21.287 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:21.287 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:21.287 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:21.287 | 99.99th=[42206] 00:17:21.287 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:17:21.287 slat (usec): min=7, max=29136, avg=76.05, stdev=1286.83 00:17:21.287 clat (usec): min=165, max=333, avg=207.00, stdev=17.68 00:17:21.287 lat (usec): min=174, max=29422, avg=283.05, stdev=1290.46 00:17:21.287 clat percentiles (usec): 00:17:21.287 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 196], 00:17:21.287 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:17:21.287 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 241], 00:17:21.287 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 334], 99.95th=[ 334], 00:17:21.287 | 99.99th=[ 334] 00:17:21.287 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:21.287 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:21.287 lat (usec) : 250=94.75%, 500=1.31% 00:17:21.287 lat (msec) : 50=3.94% 00:17:21.287 cpu : usr=0.69%, sys=1.18%, ctx=536, majf=0, minf=2 00:17:21.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.287 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.287 00:17:21.287 Run status group 0 (all jobs): 00:17:21.287 READ: bw=82.8KiB/s (84.8kB/s), 82.8KiB/s-82.8KiB/s (84.8kB/s-84.8kB/s), io=84.0KiB (86.0kB), run=1014-1014msec 00:17:21.287 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec 00:17:21.287 00:17:21.287 Disk stats (read/write): 00:17:21.287 nvme0n1: ios=44/512, merge=0/0, ticks=1733/100, in_queue=1833, util=98.70% 00:17:21.287 12:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
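The disconnect check traced above (waitforserial_disconnect) boils down to polling lsblk until no block device reports the SPDK serial any more. A stripped-down sketch; the retry cap is an assumption borrowed from the connect-side helper, and the rest of the helper's bookkeeping is elided:

# Wait for the namespace's block device to disappear after nvme disconnect.
i=0
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    (( i++ >= 15 )) && { echo "device never went away" >&2; exit 1; }
    sleep 1
done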
00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.545 rmmod nvme_tcp 00:17:21.545 rmmod nvme_fabrics 00:17:21.545 rmmod nvme_keyring 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 987672 ']' 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 987672 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 987672 ']' 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 987672 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 987672 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 987672' 00:17:21.545 killing process with pid 987672 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 987672 00:17:21.545 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 987672 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.805 12:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.338 12:12:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.338 00:17:24.338 real 0m9.850s 00:17:24.338 user 0m22.264s 00:17:24.338 sys 0m2.281s 00:17:24.338 12:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.338 12:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:24.339 ************************************ 00:17:24.339 END TEST nvmf_nmic 00:17:24.339 ************************************ 00:17:24.339 12:12:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:24.339 12:12:31 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:24.339 12:12:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:17:24.339 12:12:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.339 12:12:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.339 ************************************ 00:17:24.339 START TEST nvmf_fio_target 00:17:24.339 ************************************ 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:24.339 * Looking for test storage... 00:17:24.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.339 12:12:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.235 12:12:33 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.235 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.236 12:12:33 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:26.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:17:26.236 00:17:26.236 --- 10.0.0.2 ping statistics --- 00:17:26.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.236 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:17:26.236 00:17:26.236 --- 10.0.0.1 ping statistics --- 00:17:26.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.236 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.236 12:12:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=990381 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 990381 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 990381 ']' 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
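The launch just traced starts nvmf_tgt inside the target namespace and then waits for its JSON-RPC socket before any rpc.py call is issued. A condensed sketch; the existence poll at the end is an assumption standing in for the framework's waitforlisten helper:

# Run the target in the namespace (command as printed in the log).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Block until the UNIX-domain RPC socket shows up (simplified poll).
while [ ! -S /var/tmp/spdk.sock ]; do sleep 1; done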
00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.236 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.236 [2024-07-22 12:12:34.075589] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:17:26.236 [2024-07-22 12:12:34.075701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.236 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.236 [2024-07-22 12:12:34.114484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:26.236 [2024-07-22 12:12:34.146885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.493 [2024-07-22 12:12:34.241499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.493 [2024-07-22 12:12:34.241564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.493 [2024-07-22 12:12:34.241592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.493 [2024-07-22 12:12:34.241605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.493 [2024-07-22 12:12:34.241626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.493 [2024-07-22 12:12:34.241699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.493 [2024-07-22 12:12:34.241763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.493 [2024-07-22 12:12:34.241814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.493 [2024-07-22 12:12:34.241817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.493 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.493 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:26.493 12:12:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.493 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.493 12:12:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.493 12:12:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.493 12:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:26.749 [2024-07-22 12:12:34.672527] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.005 12:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:27.262 12:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:27.262 12:12:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:27.519 12:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:27.519 12:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:27.776 12:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:27.776 12:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:28.032 12:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:28.032 12:12:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:28.288 12:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:28.545 12:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:28.545 12:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:28.802 12:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:28.802 12:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:29.059 12:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:29.059 12:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:29.316 12:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:29.573 12:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:29.573 12:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.830 12:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:29.830 12:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:30.088 12:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.088 [2024-07-22 12:12:37.999366] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.088 12:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:30.345 12:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:30.602 12:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:31.575 12:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:17:31.575 12:12:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:31.575 12:12:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:31.575 12:12:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:31.575 12:12:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:31.575 12:12:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:33.472 12:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:33.472 12:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:33.472 12:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:33.472 12:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:33.472 12:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:33.472 12:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:33.472 12:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:33.472 [global] 00:17:33.472 thread=1 00:17:33.472 invalidate=1 00:17:33.472 rw=write 00:17:33.472 time_based=1 00:17:33.472 runtime=1 00:17:33.472 ioengine=libaio 00:17:33.472 direct=1 00:17:33.472 bs=4096 00:17:33.472 iodepth=1 00:17:33.472 norandommap=0 00:17:33.472 numjobs=1 00:17:33.472 00:17:33.472 verify_dump=1 00:17:33.472 verify_backlog=512 00:17:33.472 verify_state_save=0 00:17:33.472 do_verify=1 00:17:33.472 verify=crc32c-intel 00:17:33.472 [job0] 00:17:33.472 filename=/dev/nvme0n1 00:17:33.472 [job1] 00:17:33.472 filename=/dev/nvme0n2 00:17:33.472 [job2] 00:17:33.472 filename=/dev/nvme0n3 00:17:33.472 [job3] 00:17:33.472 filename=/dev/nvme0n4 00:17:33.472 Could not set queue depth (nvme0n1) 00:17:33.472 Could not set queue depth (nvme0n2) 00:17:33.472 Could not set queue depth (nvme0n3) 00:17:33.472 Could not set queue depth (nvme0n4) 00:17:33.472 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:33.472 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:33.472 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:33.472 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:33.472 fio-3.35 00:17:33.472 Starting 4 threads 00:17:34.848 00:17:34.848 job0: (groupid=0, jobs=1): err= 0: pid=991329: Mon Jul 22 12:12:42 2024 00:17:34.848 read: IOPS=23, BW=95.7KiB/s (98.0kB/s)(96.0KiB/1003msec) 00:17:34.848 slat (nsec): min=6895, max=33238, avg=26456.83, stdev=9237.08 00:17:34.848 clat (usec): min=382, max=41013, avg=36391.46, stdev=12474.21 00:17:34.848 lat (usec): min=400, max=41045, avg=36417.92, stdev=12479.62 00:17:34.848 clat percentiles (usec): 00:17:34.848 | 1.00th=[ 383], 5.00th=[ 519], 10.00th=[12780], 20.00th=[41157], 00:17:34.848 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:34.848 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:34.848 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:34.848 | 
99.99th=[41157] 00:17:34.848 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:17:34.848 slat (nsec): min=6508, max=46259, avg=12017.13, stdev=7091.21 00:17:34.848 clat (usec): min=177, max=2054, avg=235.32, stdev=87.17 00:17:34.848 lat (usec): min=185, max=2065, avg=247.33, stdev=88.00 00:17:34.848 clat percentiles (usec): 00:17:34.848 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 208], 00:17:34.848 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:17:34.848 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:17:34.848 | 99.00th=[ 379], 99.50th=[ 478], 99.90th=[ 2057], 99.95th=[ 2057], 00:17:34.848 | 99.99th=[ 2057] 00:17:34.848 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:17:34.848 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:34.848 lat (usec) : 250=75.19%, 500=20.15%, 750=0.37% 00:17:34.848 lat (msec) : 4=0.19%, 20=0.19%, 50=3.92% 00:17:34.848 cpu : usr=0.40%, sys=0.70%, ctx=536, majf=0, minf=2 00:17:34.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:34.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.848 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:34.848 job1: (groupid=0, jobs=1): err= 0: pid=991330: Mon Jul 22 12:12:42 2024 00:17:34.848 read: IOPS=517, BW=2072KiB/s (2121kB/s)(2136KiB/1031msec) 00:17:34.848 slat (nsec): min=6418, max=56416, avg=14546.63, stdev=6826.73 00:17:34.848 clat (usec): min=285, max=42510, avg=1416.00, stdev=6614.00 00:17:34.848 lat (usec): min=293, max=42544, avg=1430.54, stdev=6616.98 00:17:34.848 clat percentiles (usec): 00:17:34.848 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:17:34.848 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:17:34.848 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 367], 00:17:34.848 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:17:34.848 | 99.99th=[42730] 00:17:34.848 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:17:34.848 slat (usec): min=8, max=15066, avg=32.75, stdev=470.34 00:17:34.848 clat (usec): min=167, max=468, avg=218.87, stdev=30.67 00:17:34.848 lat (usec): min=177, max=15330, avg=251.61, stdev=472.77 00:17:34.848 clat percentiles (usec): 00:17:34.848 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 200], 00:17:34.848 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:17:34.848 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 269], 00:17:34.848 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 408], 99.95th=[ 469], 00:17:34.848 | 99.99th=[ 469] 00:17:34.848 bw ( KiB/s): min= 8192, max= 8192, per=82.72%, avg=8192.00, stdev= 0.00, samples=1 00:17:34.848 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:34.848 lat (usec) : 250=57.06%, 500=41.98%, 750=0.06% 00:17:34.848 lat (msec) : 50=0.90% 00:17:34.848 cpu : usr=1.75%, sys=3.30%, ctx=1560, majf=0, minf=1 00:17:34.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:34.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.848 issued rwts: total=534,1024,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:17:34.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:34.848 job2: (groupid=0, jobs=1): err= 0: pid=991331: Mon Jul 22 12:12:42 2024 00:17:34.848 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(100KiB/1034msec) 00:17:34.848 slat (nsec): min=8114, max=39447, avg=30534.44, stdev=7912.09 00:17:34.848 clat (usec): min=389, max=41579, avg=36138.51, stdev=13462.24 00:17:34.848 lat (usec): min=423, max=41612, avg=36169.04, stdev=13461.00 00:17:34.848 clat percentiles (usec): 00:17:34.848 | 1.00th=[ 392], 5.00th=[ 416], 10.00th=[ 461], 20.00th=[41157], 00:17:34.848 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:34.848 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:34.848 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:34.848 | 99.99th=[41681] 00:17:34.848 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:17:34.848 slat (nsec): min=6561, max=59156, avg=11586.33, stdev=6666.34 00:17:34.848 clat (usec): min=181, max=607, avg=237.60, stdev=36.70 00:17:34.848 lat (usec): min=189, max=624, avg=249.19, stdev=37.59 00:17:34.848 clat percentiles (usec): 00:17:34.848 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 212], 00:17:34.848 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 239], 00:17:34.848 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 306], 00:17:34.848 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 611], 99.95th=[ 611], 00:17:34.848 | 99.99th=[ 611] 00:17:34.848 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:17:34.848 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:34.848 lat (usec) : 250=70.02%, 500=25.70%, 750=0.19% 00:17:34.848 lat (msec) : 50=4.10% 00:17:34.848 cpu : usr=0.48%, sys=0.39%, ctx=537, majf=0, minf=1 00:17:34.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:34.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.848 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:34.848 job3: (groupid=0, jobs=1): err= 0: pid=991332: Mon Jul 22 12:12:42 2024 00:17:34.848 read: IOPS=28, BW=112KiB/s (115kB/s)(116KiB/1033msec) 00:17:34.848 slat (nsec): min=17339, max=41726, avg=32015.14, stdev=6706.42 00:17:34.848 clat (usec): min=367, max=44014, avg=29967.62, stdev=18564.21 00:17:34.848 lat (usec): min=400, max=44056, avg=29999.64, stdev=18562.79 00:17:34.848 clat percentiles (usec): 00:17:34.848 | 1.00th=[ 367], 5.00th=[ 404], 10.00th=[ 404], 20.00th=[ 429], 00:17:34.848 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:34.848 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:17:34.849 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:17:34.849 | 99.99th=[43779] 00:17:34.849 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:17:34.849 slat (nsec): min=7113, max=60007, avg=14995.62, stdev=8878.63 00:17:34.849 clat (usec): min=197, max=503, avg=298.07, stdev=60.77 00:17:34.849 lat (usec): min=206, max=516, avg=313.07, stdev=63.74 00:17:34.849 clat percentiles (usec): 00:17:34.849 | 1.00th=[ 202], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 243], 00:17:34.849 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 285], 60.00th=[ 310], 00:17:34.849 | 
70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 412], 00:17:34.849 | 99.00th=[ 453], 99.50th=[ 457], 99.90th=[ 502], 99.95th=[ 502], 00:17:34.849 | 99.99th=[ 502] 00:17:34.849 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:17:34.849 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:34.849 lat (usec) : 250=23.84%, 500=71.90%, 750=0.37% 00:17:34.849 lat (msec) : 50=3.88% 00:17:34.849 cpu : usr=0.48%, sys=0.97%, ctx=541, majf=0, minf=1 00:17:34.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:34.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.849 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:34.849 00:17:34.849 Run status group 0 (all jobs): 00:17:34.849 READ: bw=2368KiB/s (2424kB/s), 95.7KiB/s-2072KiB/s (98.0kB/s-2121kB/s), io=2448KiB (2507kB), run=1003-1034msec 00:17:34.849 WRITE: bw=9903KiB/s (10.1MB/s), 1981KiB/s-3973KiB/s (2028kB/s-4068kB/s), io=10.0MiB (10.5MB), run=1003-1034msec 00:17:34.849 00:17:34.849 Disk stats (read/write): 00:17:34.849 nvme0n1: ios=70/512, merge=0/0, ticks=739/117, in_queue=856, util=86.67% 00:17:34.849 nvme0n2: ios=575/1024, merge=0/0, ticks=1464/207, in_queue=1671, util=97.66% 00:17:34.849 nvme0n3: ios=17/512, merge=0/0, ticks=698/114, in_queue=812, util=88.89% 00:17:34.849 nvme0n4: ios=17/512, merge=0/0, ticks=660/136, in_queue=796, util=89.64% 00:17:34.849 12:12:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:34.849 [global] 00:17:34.849 thread=1 00:17:34.849 invalidate=1 00:17:34.849 rw=randwrite 00:17:34.849 time_based=1 00:17:34.849 runtime=1 00:17:34.849 ioengine=libaio 00:17:34.849 direct=1 00:17:34.849 bs=4096 00:17:34.849 iodepth=1 00:17:34.849 norandommap=0 00:17:34.849 numjobs=1 00:17:34.849 00:17:34.849 verify_dump=1 00:17:34.849 verify_backlog=512 00:17:34.849 verify_state_save=0 00:17:34.849 do_verify=1 00:17:34.849 verify=crc32c-intel 00:17:34.849 [job0] 00:17:34.849 filename=/dev/nvme0n1 00:17:34.849 [job1] 00:17:34.849 filename=/dev/nvme0n2 00:17:34.849 [job2] 00:17:34.849 filename=/dev/nvme0n3 00:17:34.849 [job3] 00:17:34.849 filename=/dev/nvme0n4 00:17:34.849 Could not set queue depth (nvme0n1) 00:17:34.849 Could not set queue depth (nvme0n2) 00:17:34.849 Could not set queue depth (nvme0n3) 00:17:34.849 Could not set queue depth (nvme0n4) 00:17:35.106 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:35.106 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:35.106 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:35.106 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:35.106 fio-3.35 00:17:35.106 Starting 4 threads 00:17:36.474 00:17:36.474 job0: (groupid=0, jobs=1): err= 0: pid=991676: Mon Jul 22 12:12:44 2024 00:17:36.474 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:17:36.474 slat (nsec): min=4665, max=40918, avg=8923.18, stdev=4252.99 00:17:36.474 clat (usec): min=223, max=2536, avg=255.98, stdev=57.74 00:17:36.474 lat (usec): min=228, 
max=2544, avg=264.90, stdev=58.16 00:17:36.474 clat percentiles (usec): 00:17:36.474 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:17:36.474 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:17:36.474 | 70.00th=[ 258], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:17:36.474 | 99.00th=[ 404], 99.50th=[ 441], 99.90th=[ 502], 99.95th=[ 510], 00:17:36.474 | 99.99th=[ 2540] 00:17:36.474 write: IOPS=2099, BW=8400KiB/s (8601kB/s)(8408KiB/1001msec); 0 zone resets 00:17:36.474 slat (nsec): min=6251, max=38948, avg=13035.61, stdev=5463.13 00:17:36.474 clat (usec): min=156, max=516, avg=198.28, stdev=51.94 00:17:36.474 lat (usec): min=162, max=532, avg=211.31, stdev=51.69 00:17:36.474 clat percentiles (usec): 00:17:36.474 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:17:36.474 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:17:36.474 | 70.00th=[ 192], 80.00th=[ 212], 90.00th=[ 235], 95.00th=[ 355], 00:17:36.474 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 453], 99.95th=[ 465], 00:17:36.474 | 99.99th=[ 519] 00:17:36.474 bw ( KiB/s): min= 8192, max= 8192, per=58.26%, avg=8192.00, stdev= 0.00, samples=1 00:17:36.474 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:36.474 lat (usec) : 250=70.75%, 500=29.18%, 750=0.05% 00:17:36.474 lat (msec) : 4=0.02% 00:17:36.474 cpu : usr=2.50%, sys=4.70%, ctx=4152, majf=0, minf=2 00:17:36.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:36.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 issued rwts: total=2048,2102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:36.475 job1: (groupid=0, jobs=1): err= 0: pid=991677: Mon Jul 22 12:12:44 2024 00:17:36.475 read: IOPS=394, BW=1576KiB/s (1614kB/s)(1600KiB/1015msec) 00:17:36.475 slat (nsec): min=6237, max=23878, avg=8517.66, stdev=2212.26 00:17:36.475 clat (usec): min=244, max=41989, avg=2168.05, stdev=8444.98 00:17:36.475 lat (usec): min=252, max=42003, avg=2176.57, stdev=8446.15 00:17:36.475 clat percentiles (usec): 00:17:36.475 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:17:36.475 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:17:36.475 | 70.00th=[ 322], 80.00th=[ 465], 90.00th=[ 515], 95.00th=[ 553], 00:17:36.475 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:36.475 | 99.99th=[42206] 00:17:36.475 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:17:36.475 slat (nsec): min=8395, max=50586, avg=11486.07, stdev=3202.53 00:17:36.475 clat (usec): min=185, max=528, avg=264.57, stdev=69.07 00:17:36.475 lat (usec): min=195, max=539, avg=276.06, stdev=69.36 00:17:36.475 clat percentiles (usec): 00:17:36.475 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:17:36.475 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 239], 00:17:36.475 | 70.00th=[ 302], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 388], 00:17:36.475 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 529], 99.95th=[ 529], 00:17:36.475 | 99.99th=[ 529] 00:17:36.475 bw ( KiB/s): min= 4096, max= 4096, per=29.13%, avg=4096.00, stdev= 0.00, samples=1 00:17:36.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:36.475 lat (usec) : 250=35.42%, 500=59.10%, 750=3.51% 00:17:36.475 lat (msec) : 50=1.97% 
00:17:36.475 cpu : usr=0.39%, sys=1.38%, ctx=913, majf=0, minf=1 00:17:36.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:36.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 issued rwts: total=400,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:36.475 job2: (groupid=0, jobs=1): err= 0: pid=991678: Mon Jul 22 12:12:44 2024 00:17:36.475 read: IOPS=20, BW=81.2KiB/s (83.1kB/s)(84.0KiB/1035msec) 00:17:36.475 slat (nsec): min=7201, max=39500, avg=23853.14, stdev=12015.45 00:17:36.475 clat (usec): min=40949, max=42027, avg=41513.71, stdev=495.68 00:17:36.475 lat (usec): min=40969, max=42039, avg=41537.56, stdev=493.84 00:17:36.475 clat percentiles (usec): 00:17:36.475 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:36.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:17:36.475 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:36.475 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:36.475 | 99.99th=[42206] 00:17:36.475 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:17:36.475 slat (nsec): min=7186, max=37806, avg=11486.92, stdev=4781.16 00:17:36.475 clat (usec): min=202, max=590, avg=302.79, stdev=83.51 00:17:36.475 lat (usec): min=210, max=605, avg=314.28, stdev=85.52 00:17:36.475 clat percentiles (usec): 00:17:36.475 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 233], 00:17:36.475 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 265], 60.00th=[ 293], 00:17:36.475 | 70.00th=[ 359], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 461], 00:17:36.475 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 594], 99.95th=[ 594], 00:17:36.475 | 99.99th=[ 594] 00:17:36.475 bw ( KiB/s): min= 4096, max= 4096, per=29.13%, avg=4096.00, stdev= 0.00, samples=1 00:17:36.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:36.475 lat (usec) : 250=38.84%, 500=55.53%, 750=1.69% 00:17:36.475 lat (msec) : 50=3.94% 00:17:36.475 cpu : usr=0.39%, sys=0.77%, ctx=534, majf=0, minf=1 00:17:36.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:36.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:36.475 job3: (groupid=0, jobs=1): err= 0: pid=991679: Mon Jul 22 12:12:44 2024 00:17:36.475 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:17:36.475 slat (nsec): min=10643, max=34324, avg=22411.62, stdev=10291.73 00:17:36.475 clat (usec): min=40864, max=41163, avg=40981.77, stdev=70.48 00:17:36.475 lat (usec): min=40876, max=41173, avg=41004.18, stdev=66.98 00:17:36.475 clat percentiles (usec): 00:17:36.475 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:36.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:36.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:36.475 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:36.475 | 99.99th=[41157] 00:17:36.475 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:17:36.475 slat (nsec): 
min=6016, max=37953, avg=12268.11, stdev=3832.27 00:17:36.475 clat (usec): min=183, max=479, avg=271.23, stdev=73.86 00:17:36.475 lat (usec): min=194, max=511, avg=283.50, stdev=73.88 00:17:36.475 clat percentiles (usec): 00:17:36.475 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:17:36.475 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 251], 00:17:36.475 | 70.00th=[ 318], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 396], 00:17:36.475 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 478], 99.95th=[ 478], 00:17:36.475 | 99.99th=[ 478] 00:17:36.475 bw ( KiB/s): min= 4096, max= 4096, per=29.13%, avg=4096.00, stdev= 0.00, samples=1 00:17:36.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:36.475 lat (usec) : 250=57.22%, 500=38.84% 00:17:36.475 lat (msec) : 50=3.94% 00:17:36.475 cpu : usr=0.40%, sys=0.60%, ctx=534, majf=0, minf=1 00:17:36.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:36.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.475 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:36.475 00:17:36.475 Run status group 0 (all jobs): 00:17:36.475 READ: bw=9623KiB/s (9854kB/s), 81.2KiB/s-8184KiB/s (83.1kB/s-8380kB/s), io=9960KiB (10.2MB), run=1001-1035msec 00:17:36.475 WRITE: bw=13.7MiB/s (14.4MB/s), 1979KiB/s-8400KiB/s (2026kB/s-8601kB/s), io=14.2MiB (14.9MB), run=1001-1035msec 00:17:36.475 00:17:36.475 Disk stats (read/write): 00:17:36.475 nvme0n1: ios=1558/2027, merge=0/0, ticks=1236/386, in_queue=1622, util=85.57% 00:17:36.475 nvme0n2: ios=419/512, merge=0/0, ticks=1603/134, in_queue=1737, util=89.53% 00:17:36.475 nvme0n3: ios=42/512, merge=0/0, ticks=1570/147, in_queue=1717, util=93.53% 00:17:36.475 nvme0n4: ios=41/512, merge=0/0, ticks=1603/133, in_queue=1736, util=94.32% 00:17:36.475 12:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:36.475 [global] 00:17:36.475 thread=1 00:17:36.475 invalidate=1 00:17:36.475 rw=write 00:17:36.475 time_based=1 00:17:36.475 runtime=1 00:17:36.475 ioengine=libaio 00:17:36.475 direct=1 00:17:36.475 bs=4096 00:17:36.475 iodepth=128 00:17:36.475 norandommap=0 00:17:36.475 numjobs=1 00:17:36.475 00:17:36.475 verify_dump=1 00:17:36.475 verify_backlog=512 00:17:36.475 verify_state_save=0 00:17:36.475 do_verify=1 00:17:36.475 verify=crc32c-intel 00:17:36.475 [job0] 00:17:36.475 filename=/dev/nvme0n1 00:17:36.475 [job1] 00:17:36.475 filename=/dev/nvme0n2 00:17:36.475 [job2] 00:17:36.475 filename=/dev/nvme0n3 00:17:36.475 [job3] 00:17:36.475 filename=/dev/nvme0n4 00:17:36.475 Could not set queue depth (nvme0n1) 00:17:36.475 Could not set queue depth (nvme0n2) 00:17:36.475 Could not set queue depth (nvme0n3) 00:17:36.475 Could not set queue depth (nvme0n4) 00:17:36.475 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:36.475 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:36.475 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:36.475 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:36.475 fio-3.35 
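The fio-wrapper flags in the command above map directly onto the generated job file that follows it: -i sets the block size (bs=4096), -d the queue depth (iodepth=128), -t the I/O pattern (rw=write), -r the runtime in seconds, and -v enables crc32c-intel verification. As a minimal sketch, assuming that mapping and reusing the options shown in the [global] section, an equivalent standalone fio invocation against one of the exported namespaces would be:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
      --thread=1 --invalidate=1 --time_based=1 --runtime=1 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 \
      --verify_dump=1 --verify_state_save=0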
00:17:36.475 Starting 4 threads 00:17:37.847 00:17:37.847 job0: (groupid=0, jobs=1): err= 0: pid=991910: Mon Jul 22 12:12:45 2024 00:17:37.847 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:37.847 slat (usec): min=2, max=10971, avg=100.04, stdev=585.09 00:17:37.847 clat (usec): min=3272, max=48068, avg=12982.42, stdev=5210.51 00:17:37.847 lat (usec): min=3290, max=48074, avg=13082.46, stdev=5251.65 00:17:37.847 clat percentiles (usec): 00:17:37.847 | 1.00th=[ 6521], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10945], 00:17:37.847 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:17:37.847 | 70.00th=[12387], 80.00th=[12911], 90.00th=[15795], 95.00th=[21365], 00:17:37.847 | 99.00th=[39584], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:17:37.847 | 99.99th=[47973] 00:17:37.847 write: IOPS=4601, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:17:37.847 slat (usec): min=3, max=32899, avg=108.10, stdev=867.44 00:17:37.847 clat (usec): min=379, max=55787, avg=14593.87, stdev=5806.98 00:17:37.847 lat (usec): min=3044, max=55816, avg=14701.97, stdev=5867.98 00:17:37.847 clat percentiles (usec): 00:17:37.847 | 1.00th=[ 6194], 5.00th=[10290], 10.00th=[10683], 20.00th=[11600], 00:17:37.847 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:17:37.847 | 70.00th=[13829], 80.00th=[16450], 90.00th=[22676], 95.00th=[28967], 00:17:37.847 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[37487], 00:17:37.847 | 99.99th=[55837] 00:17:37.847 bw ( KiB/s): min=16384, max=20480, per=27.26%, avg=18432.00, stdev=2896.31, samples=2 00:17:37.847 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:17:37.847 lat (usec) : 500=0.01% 00:17:37.847 lat (msec) : 4=0.37%, 10=6.22%, 20=83.38%, 50=10.01%, 100=0.01% 00:17:37.847 cpu : usr=5.69%, sys=7.39%, ctx=421, majf=0, minf=13 00:17:37.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:37.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.847 issued rwts: total=4608,4615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.847 job1: (groupid=0, jobs=1): err= 0: pid=991911: Mon Jul 22 12:12:45 2024 00:17:37.847 read: IOPS=4582, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:17:37.847 slat (usec): min=2, max=13651, avg=114.31, stdev=759.33 00:17:37.847 clat (usec): min=2092, max=47342, avg=14186.06, stdev=5598.29 00:17:37.847 lat (usec): min=2403, max=47347, avg=14300.37, stdev=5656.82 00:17:37.847 clat percentiles (usec): 00:17:37.847 | 1.00th=[ 3195], 5.00th=[ 6325], 10.00th=[ 9372], 20.00th=[11207], 00:17:37.847 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12256], 60.00th=[14484], 00:17:37.847 | 70.00th=[15401], 80.00th=[16909], 90.00th=[21365], 95.00th=[25297], 00:17:37.847 | 99.00th=[34866], 99.50th=[36439], 99.90th=[47449], 99.95th=[47449], 00:17:37.847 | 99.99th=[47449] 00:17:37.847 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:17:37.847 slat (usec): min=3, max=10802, avg=91.48, stdev=513.64 00:17:37.847 clat (usec): min=331, max=70473, avg=13485.04, stdev=9183.46 00:17:37.847 lat (usec): min=883, max=70479, avg=13576.52, stdev=9233.27 00:17:37.847 clat percentiles (usec): 00:17:37.847 | 1.00th=[ 2442], 5.00th=[ 6849], 10.00th=[ 7767], 20.00th=[10028], 00:17:37.847 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 
60.00th=[11600], 00:17:37.847 | 70.00th=[11863], 80.00th=[14353], 90.00th=[22676], 95.00th=[24249], 00:17:37.847 | 99.00th=[65274], 99.50th=[66323], 99.90th=[70779], 99.95th=[70779], 00:17:37.847 | 99.99th=[70779] 00:17:37.847 bw ( KiB/s): min=17352, max=19512, per=27.26%, avg=18432.00, stdev=1527.35, samples=2 00:17:37.847 iops : min= 4338, max= 4878, avg=4608.00, stdev=381.84, samples=2 00:17:37.847 lat (usec) : 500=0.01%, 1000=0.05% 00:17:37.847 lat (msec) : 2=0.29%, 4=1.77%, 10=14.09%, 20=72.32%, 50=10.25% 00:17:37.847 lat (msec) : 100=1.21% 00:17:37.847 cpu : usr=4.69%, sys=7.18%, ctx=482, majf=0, minf=7 00:17:37.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:37.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.847 issued rwts: total=4601,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.847 job2: (groupid=0, jobs=1): err= 0: pid=991912: Mon Jul 22 12:12:45 2024 00:17:37.847 read: IOPS=3730, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1004msec) 00:17:37.847 slat (usec): min=2, max=50843, avg=145.78, stdev=1287.50 00:17:37.847 clat (usec): min=764, max=128879, avg=18662.38, stdev=18874.26 00:17:37.847 lat (msec): min=3, max=128, avg=18.81, stdev=18.97 00:17:37.847 clat percentiles (msec): 00:17:37.847 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 13], 00:17:37.847 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:17:37.847 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 27], 95.00th=[ 61], 00:17:37.847 | 99.00th=[ 114], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 129], 00:17:37.847 | 99.99th=[ 129] 00:17:37.847 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:17:37.847 slat (usec): min=3, max=13287, avg=101.22, stdev=638.39 00:17:37.847 clat (usec): min=3446, max=40967, avg=13891.25, stdev=3343.47 00:17:37.847 lat (usec): min=3454, max=40971, avg=13992.48, stdev=3399.47 00:17:37.847 clat percentiles (usec): 00:17:37.847 | 1.00th=[ 7046], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[12256], 00:17:37.847 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:17:37.847 | 70.00th=[14091], 80.00th=[14877], 90.00th=[17171], 95.00th=[20579], 00:17:37.847 | 99.00th=[26346], 99.50th=[28443], 99.90th=[41157], 99.95th=[41157], 00:17:37.847 | 99.99th=[41157] 00:17:37.847 bw ( KiB/s): min=12288, max=20480, per=24.23%, avg=16384.00, stdev=5792.62, samples=2 00:17:37.847 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:17:37.847 lat (usec) : 1000=0.01% 00:17:37.847 lat (msec) : 2=0.01%, 4=1.22%, 10=5.17%, 20=84.62%, 50=6.53% 00:17:37.847 lat (msec) : 100=1.63%, 250=0.80% 00:17:37.847 cpu : usr=3.99%, sys=5.58%, ctx=396, majf=0, minf=21 00:17:37.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:37.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.847 issued rwts: total=3745,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.847 job3: (groupid=0, jobs=1): err= 0: pid=991913: Mon Jul 22 12:12:45 2024 00:17:37.847 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:17:37.847 slat (usec): min=2, max=19195, avg=134.49, stdev=924.07 00:17:37.848 clat (usec): min=3147, max=35479, 
avg=16050.12, stdev=5136.35 00:17:37.848 lat (usec): min=3153, max=35498, avg=16184.61, stdev=5196.03 00:17:37.848 clat percentiles (usec): 00:17:37.848 | 1.00th=[ 6783], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[12387], 00:17:37.848 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13960], 60.00th=[16581], 00:17:37.848 | 70.00th=[17695], 80.00th=[20579], 90.00th=[22414], 95.00th=[26346], 00:17:37.848 | 99.00th=[32375], 99.50th=[33424], 99.90th=[34866], 99.95th=[35390], 00:17:37.848 | 99.99th=[35390] 00:17:37.848 write: IOPS=3652, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1005msec); 0 zone resets 00:17:37.848 slat (usec): min=4, max=17617, avg=128.74, stdev=649.56 00:17:37.848 clat (usec): min=561, max=40760, avg=19078.16, stdev=8128.51 00:17:37.848 lat (usec): min=1331, max=42258, avg=19206.89, stdev=8191.42 00:17:37.848 clat percentiles (usec): 00:17:37.848 | 1.00th=[ 3818], 5.00th=[ 7308], 10.00th=[10814], 20.00th=[12387], 00:17:37.848 | 30.00th=[13304], 40.00th=[14746], 50.00th=[17957], 60.00th=[21103], 00:17:37.848 | 70.00th=[22938], 80.00th=[24773], 90.00th=[31851], 95.00th=[34866], 00:17:37.848 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:17:37.848 | 99.99th=[40633] 00:17:37.848 bw ( KiB/s): min=12288, max=16384, per=21.20%, avg=14336.00, stdev=2896.31, samples=2 00:17:37.848 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:17:37.848 lat (usec) : 750=0.01% 00:17:37.848 lat (msec) : 2=0.14%, 4=0.63%, 10=6.35%, 20=59.83%, 50=33.03% 00:17:37.848 cpu : usr=4.58%, sys=6.37%, ctx=460, majf=0, minf=9 00:17:37.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:37.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.848 issued rwts: total=3584,3671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.848 00:17:37.848 Run status group 0 (all jobs): 00:17:37.848 READ: bw=64.3MiB/s (67.4MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=64.6MiB (67.7MB), run=1003-1005msec 00:17:37.848 WRITE: bw=66.0MiB/s (69.2MB/s), 14.3MiB/s-18.0MiB/s (15.0MB/s-18.8MB/s), io=66.4MiB (69.6MB), run=1003-1005msec 00:17:37.848 00:17:37.848 Disk stats (read/write): 00:17:37.848 nvme0n1: ios=3634/4013, merge=0/0, ticks=21299/31685, in_queue=52984, util=85.47% 00:17:37.848 nvme0n2: ios=3604/4088, merge=0/0, ticks=33560/31065, in_queue=64625, util=86.59% 00:17:37.848 nvme0n3: ios=3072/3447, merge=0/0, ticks=24486/22395, in_queue=46881, util=88.91% 00:17:37.848 nvme0n4: ios=2903/3072, merge=0/0, ticks=38473/51903, in_queue=90376, util=89.46% 00:17:37.848 12:12:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:37.848 [global] 00:17:37.848 thread=1 00:17:37.848 invalidate=1 00:17:37.848 rw=randwrite 00:17:37.848 time_based=1 00:17:37.848 runtime=1 00:17:37.848 ioengine=libaio 00:17:37.848 direct=1 00:17:37.848 bs=4096 00:17:37.848 iodepth=128 00:17:37.848 norandommap=0 00:17:37.848 numjobs=1 00:17:37.848 00:17:37.848 verify_dump=1 00:17:37.848 verify_backlog=512 00:17:37.848 verify_state_save=0 00:17:37.848 do_verify=1 00:17:37.848 verify=crc32c-intel 00:17:37.848 [job0] 00:17:37.848 filename=/dev/nvme0n1 00:17:37.848 [job1] 00:17:37.848 filename=/dev/nvme0n2 00:17:37.848 [job2] 00:17:37.848 filename=/dev/nvme0n3 00:17:37.848 [job3] 00:17:37.848 
filename=/dev/nvme0n4 00:17:37.848 Could not set queue depth (nvme0n1) 00:17:37.848 Could not set queue depth (nvme0n2) 00:17:37.848 Could not set queue depth (nvme0n3) 00:17:37.848 Could not set queue depth (nvme0n4) 00:17:38.105 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:38.105 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:38.105 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:38.105 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:38.105 fio-3.35 00:17:38.105 Starting 4 threads 00:17:39.475 00:17:39.475 job0: (groupid=0, jobs=1): err= 0: pid=992139: Mon Jul 22 12:12:46 2024 00:17:39.475 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:17:39.475 slat (usec): min=2, max=8931, avg=82.88, stdev=469.14 00:17:39.475 clat (usec): min=4396, max=20751, avg=11141.28, stdev=1684.78 00:17:39.475 lat (usec): min=4427, max=20794, avg=11224.16, stdev=1717.08 00:17:39.475 clat percentiles (usec): 00:17:39.475 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10421], 00:17:39.475 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:17:39.475 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12911], 95.00th=[13960], 00:17:39.475 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:17:39.475 | 99.99th=[20841] 00:17:39.475 write: IOPS=5751, BW=22.5MiB/s (23.6MB/s)(22.5MiB/1002msec); 0 zone resets 00:17:39.475 slat (usec): min=3, max=9311, avg=81.24, stdev=453.09 00:17:39.475 clat (usec): min=1686, max=31141, avg=11069.05, stdev=3144.80 00:17:39.475 lat (usec): min=1694, max=31159, avg=11150.30, stdev=3173.96 00:17:39.475 clat percentiles (usec): 00:17:39.475 | 1.00th=[ 4948], 5.00th=[ 7242], 10.00th=[ 8356], 20.00th=[10290], 00:17:39.475 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:17:39.475 | 70.00th=[10945], 80.00th=[11076], 90.00th=[12911], 95.00th=[18220], 00:17:39.475 | 99.00th=[23200], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:17:39.475 | 99.99th=[31065] 00:17:39.475 bw ( KiB/s): min=22112, max=23048, per=35.12%, avg=22580.00, stdev=661.85, samples=2 00:17:39.475 iops : min= 5528, max= 5762, avg=5645.00, stdev=165.46, samples=2 00:17:39.475 lat (msec) : 2=0.16%, 4=0.13%, 10=15.14%, 20=82.33%, 50=2.24% 00:17:39.475 cpu : usr=5.79%, sys=10.09%, ctx=550, majf=0, minf=15 00:17:39.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:39.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:39.475 issued rwts: total=5632,5763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:39.475 job1: (groupid=0, jobs=1): err= 0: pid=992140: Mon Jul 22 12:12:46 2024 00:17:39.475 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:17:39.475 slat (usec): min=3, max=13167, avg=149.14, stdev=900.20 00:17:39.475 clat (usec): min=9602, max=88978, avg=17992.59, stdev=8578.99 00:17:39.475 lat (usec): min=9610, max=88990, avg=18141.73, stdev=8667.81 00:17:39.475 clat percentiles (usec): 00:17:39.475 | 1.00th=[11731], 5.00th=[13566], 10.00th=[13829], 20.00th=[14484], 00:17:39.475 | 30.00th=[15008], 40.00th=[15664], 50.00th=[15926], 60.00th=[16057], 
00:17:39.475 | 70.00th=[16450], 80.00th=[17433], 90.00th=[23200], 95.00th=[32637], 00:17:39.475 | 99.00th=[58459], 99.50th=[61604], 99.90th=[88605], 99.95th=[88605], 00:17:39.475 | 99.99th=[88605] 00:17:39.475 write: IOPS=2245, BW=8980KiB/s (9196kB/s)(9052KiB/1008msec); 0 zone resets 00:17:39.475 slat (usec): min=4, max=23465, avg=300.12, stdev=1572.52 00:17:39.475 clat (msec): min=3, max=137, avg=39.79, stdev=32.15 00:17:39.475 lat (msec): min=10, max=137, avg=40.09, stdev=32.33 00:17:39.475 clat percentiles (msec): 00:17:39.475 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 16], 00:17:39.475 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 28], 60.00th=[ 32], 00:17:39.475 | 70.00th=[ 41], 80.00th=[ 61], 90.00th=[ 102], 95.00th=[ 123], 00:17:39.475 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 138], 00:17:39.475 | 99.99th=[ 138] 00:17:39.475 bw ( KiB/s): min= 8240, max= 8840, per=13.28%, avg=8540.00, stdev=424.26, samples=2 00:17:39.475 iops : min= 2060, max= 2210, avg=2135.00, stdev=106.07, samples=2 00:17:39.475 lat (msec) : 4=0.02%, 10=0.14%, 20=56.25%, 50=30.87%, 100=7.38% 00:17:39.475 lat (msec) : 250=5.34% 00:17:39.475 cpu : usr=2.28%, sys=4.07%, ctx=228, majf=0, minf=15 00:17:39.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:17:39.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:39.475 issued rwts: total=2048,2263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:39.475 job2: (groupid=0, jobs=1): err= 0: pid=992141: Mon Jul 22 12:12:46 2024 00:17:39.475 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:17:39.475 slat (usec): min=3, max=16639, avg=144.57, stdev=952.71 00:17:39.475 clat (usec): min=7009, max=60870, avg=17032.22, stdev=6792.10 00:17:39.475 lat (usec): min=7018, max=60888, avg=17176.79, stdev=6890.48 00:17:39.475 clat percentiles (usec): 00:17:39.475 | 1.00th=[ 9765], 5.00th=[12780], 10.00th=[13435], 20.00th=[13698], 00:17:39.475 | 30.00th=[14091], 40.00th=[14353], 50.00th=[15008], 60.00th=[15664], 00:17:39.475 | 70.00th=[16057], 80.00th=[17171], 90.00th=[23200], 95.00th=[31851], 00:17:39.475 | 99.00th=[49546], 99.50th=[54264], 99.90th=[61080], 99.95th=[61080], 00:17:39.475 | 99.99th=[61080] 00:17:39.475 write: IOPS=3161, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1006msec); 0 zone resets 00:17:39.475 slat (usec): min=4, max=11788, avg=165.74, stdev=855.15 00:17:39.475 clat (usec): min=1665, max=67904, avg=23641.67, stdev=15349.86 00:17:39.475 lat (usec): min=1677, max=67914, avg=23807.41, stdev=15457.42 00:17:39.475 clat percentiles (usec): 00:17:39.475 | 1.00th=[ 6194], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[11469], 00:17:39.475 | 30.00th=[13042], 40.00th=[13435], 50.00th=[17433], 60.00th=[23987], 00:17:39.475 | 70.00th=[26870], 80.00th=[38011], 90.00th=[47449], 95.00th=[57410], 00:17:39.475 | 99.00th=[65274], 99.50th=[66323], 99.90th=[67634], 99.95th=[67634], 00:17:39.475 | 99.99th=[67634] 00:17:39.475 bw ( KiB/s): min=10688, max=13936, per=19.15%, avg=12312.00, stdev=2296.68, samples=2 00:17:39.475 iops : min= 2672, max= 3484, avg=3078.00, stdev=574.17, samples=2 00:17:39.475 lat (msec) : 2=0.03%, 4=0.21%, 10=5.73%, 20=64.22%, 50=25.27% 00:17:39.475 lat (msec) : 100=4.54% 00:17:39.475 cpu : usr=3.98%, sys=6.07%, ctx=284, majf=0, minf=13 00:17:39.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:39.475 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:39.475 issued rwts: total=3072,3180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:39.475 job3: (groupid=0, jobs=1): err= 0: pid=992142: Mon Jul 22 12:12:46 2024 00:17:39.475 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:17:39.475 slat (usec): min=2, max=11317, avg=104.78, stdev=647.65 00:17:39.475 clat (usec): min=4426, max=25007, avg=13419.19, stdev=2567.30 00:17:39.475 lat (usec): min=4431, max=25027, avg=13523.97, stdev=2612.32 00:17:39.475 clat percentiles (usec): 00:17:39.475 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[11207], 20.00th=[12125], 00:17:39.475 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:17:39.475 | 70.00th=[13698], 80.00th=[14484], 90.00th=[16057], 95.00th=[19792], 00:17:39.475 | 99.00th=[22676], 99.50th=[23725], 99.90th=[25035], 99.95th=[25035], 00:17:39.475 | 99.99th=[25035] 00:17:39.475 write: IOPS=4974, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1004msec); 0 zone resets 00:17:39.475 slat (usec): min=3, max=11020, avg=93.75, stdev=575.08 00:17:39.475 clat (usec): min=2545, max=27095, avg=13040.09, stdev=2977.47 00:17:39.475 lat (usec): min=2553, max=27141, avg=13133.84, stdev=3016.65 00:17:39.475 clat percentiles (usec): 00:17:39.475 | 1.00th=[ 4293], 5.00th=[ 7046], 10.00th=[ 9634], 20.00th=[11863], 00:17:39.475 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:17:39.475 | 70.00th=[13566], 80.00th=[14353], 90.00th=[16712], 95.00th=[17695], 00:17:39.475 | 99.00th=[20317], 99.50th=[21627], 99.90th=[23725], 99.95th=[25035], 00:17:39.475 | 99.99th=[27132] 00:17:39.475 bw ( KiB/s): min=19376, max=19560, per=30.28%, avg=19468.00, stdev=130.11, samples=2 00:17:39.475 iops : min= 4844, max= 4890, avg=4867.00, stdev=32.53, samples=2 00:17:39.475 lat (msec) : 4=0.49%, 10=6.94%, 20=89.46%, 50=3.11% 00:17:39.475 cpu : usr=5.18%, sys=8.47%, ctx=436, majf=0, minf=7 00:17:39.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:39.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:39.475 issued rwts: total=4608,4994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:39.475 00:17:39.475 Run status group 0 (all jobs): 00:17:39.475 READ: bw=59.5MiB/s (62.4MB/s), 8127KiB/s-22.0MiB/s (8322kB/s-23.0MB/s), io=60.0MiB (62.9MB), run=1002-1008msec 00:17:39.475 WRITE: bw=62.8MiB/s (65.8MB/s), 8980KiB/s-22.5MiB/s (9196kB/s-23.6MB/s), io=63.3MiB (66.4MB), run=1002-1008msec 00:17:39.475 00:17:39.475 Disk stats (read/write): 00:17:39.475 nvme0n1: ios=4649/5007, merge=0/0, ticks=24031/24053, in_queue=48084, util=98.00% 00:17:39.476 nvme0n2: ios=2074/2048, merge=0/0, ticks=18562/34453, in_queue=53015, util=97.66% 00:17:39.476 nvme0n3: ios=2231/2560, merge=0/0, ticks=38727/65975, in_queue=104702, util=98.23% 00:17:39.476 nvme0n4: ios=3994/4096, merge=0/0, ticks=36269/34329, in_queue=70598, util=98.21% 00:17:39.476 12:12:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:39.476 12:12:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=992278 00:17:39.476 12:12:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 1 -t read -r 10 00:17:39.476 12:12:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:39.476 [global] 00:17:39.476 thread=1 00:17:39.476 invalidate=1 00:17:39.476 rw=read 00:17:39.476 time_based=1 00:17:39.476 runtime=10 00:17:39.476 ioengine=libaio 00:17:39.476 direct=1 00:17:39.476 bs=4096 00:17:39.476 iodepth=1 00:17:39.476 norandommap=1 00:17:39.476 numjobs=1 00:17:39.476 00:17:39.476 [job0] 00:17:39.476 filename=/dev/nvme0n1 00:17:39.476 [job1] 00:17:39.476 filename=/dev/nvme0n2 00:17:39.476 [job2] 00:17:39.476 filename=/dev/nvme0n3 00:17:39.476 [job3] 00:17:39.476 filename=/dev/nvme0n4 00:17:39.476 Could not set queue depth (nvme0n1) 00:17:39.476 Could not set queue depth (nvme0n2) 00:17:39.476 Could not set queue depth (nvme0n3) 00:17:39.476 Could not set queue depth (nvme0n4) 00:17:39.476 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:39.476 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:39.476 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:39.476 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:39.476 fio-3.35 00:17:39.476 Starting 4 threads 00:17:42.766 12:12:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:42.766 12:12:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:42.766 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3760128, buflen=4096 00:17:42.767 fio: pid=992450, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:42.767 12:12:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:42.767 12:12:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:42.767 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=323584, buflen=4096 00:17:42.767 fio: pid=992442, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:43.024 12:12:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:43.024 12:12:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:43.024 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18890752, buflen=4096 00:17:43.024 fio: pid=992388, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:43.282 12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:43.282 12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:43.282 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=31305728, buflen=4096 00:17:43.282 fio: pid=992403, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:43.282 00:17:43.282 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=992388: Mon Jul 22 12:12:51 2024 00:17:43.282 
read: IOPS=1360, BW=5440KiB/s (5571kB/s)(18.0MiB/3391msec) 00:17:43.282 slat (usec): min=5, max=7933, avg=19.76, stdev=116.90 00:17:43.282 clat (usec): min=237, max=42053, avg=706.04, stdev=4062.47 00:17:43.282 lat (usec): min=243, max=49987, avg=725.81, stdev=4081.44 00:17:43.282 clat percentiles (usec): 00:17:43.282 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:17:43.282 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:17:43.282 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 359], 00:17:43.282 | 99.00th=[ 1254], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:43.282 | 99.99th=[42206] 00:17:43.282 bw ( KiB/s): min= 96, max=12640, per=42.41%, avg=6137.33, stdev=5757.11, samples=6 00:17:43.282 iops : min= 24, max= 3160, avg=1534.33, stdev=1439.28, samples=6 00:17:43.282 lat (usec) : 250=2.32%, 500=95.10%, 750=1.52% 00:17:43.282 lat (msec) : 2=0.04%, 50=1.00% 00:17:43.282 cpu : usr=0.91%, sys=2.95%, ctx=4617, majf=0, minf=1 00:17:43.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.282 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.282 issued rwts: total=4613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.282 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=992403: Mon Jul 22 12:12:51 2024 00:17:43.282 read: IOPS=2086, BW=8346KiB/s (8546kB/s)(29.9MiB/3663msec) 00:17:43.282 slat (usec): min=4, max=8865, avg=17.02, stdev=125.13 00:17:43.283 clat (usec): min=231, max=43904, avg=455.07, stdev=2374.61 00:17:43.283 lat (usec): min=238, max=50946, avg=472.09, stdev=2412.64 00:17:43.283 clat percentiles (usec): 00:17:43.283 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 285], 00:17:43.283 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:17:43.283 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 441], 00:17:43.283 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[42206], 99.95th=[42206], 00:17:43.283 | 99.99th=[43779] 00:17:43.283 bw ( KiB/s): min= 94, max=12888, per=60.33%, avg=8731.14, stdev=5365.39, samples=7 00:17:43.283 iops : min= 23, max= 3222, avg=2182.71, stdev=1341.48, samples=7 00:17:43.283 lat (usec) : 250=1.60%, 500=96.86%, 750=1.18%, 1000=0.01% 00:17:43.283 lat (msec) : 2=0.01%, 50=0.33% 00:17:43.283 cpu : usr=1.64%, sys=3.71%, ctx=7650, majf=0, minf=1 00:17:43.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.283 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.283 issued rwts: total=7644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.283 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=992442: Mon Jul 22 12:12:51 2024 00:17:43.283 read: IOPS=25, BW=102KiB/s (104kB/s)(316KiB/3111msec) 00:17:43.283 slat (usec): min=11, max=3863, avg=67.55, stdev=429.82 00:17:43.283 clat (usec): min=343, max=42011, avg=39020.15, stdev=8974.89 00:17:43.283 lat (usec): min=356, max=44962, avg=39088.39, stdev=8997.28 00:17:43.283 clat percentiles (usec): 00:17:43.283 | 1.00th=[ 343], 5.00th=[ 635], 10.00th=[40633], 20.00th=[41157], 00:17:43.283 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:43.283 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:17:43.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:43.283 | 99.99th=[42206] 00:17:43.283 bw ( KiB/s): min= 96, max= 104, per=0.70%, avg=101.33, stdev= 4.13, samples=6 00:17:43.283 iops : min= 24, max= 26, avg=25.33, stdev= 1.03, samples=6 00:17:43.283 lat (usec) : 500=3.75%, 750=1.25% 00:17:43.283 lat (msec) : 50=93.75% 00:17:43.283 cpu : usr=0.06%, sys=0.00%, ctx=82, majf=0, minf=1 00:17:43.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.283 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.283 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.283 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=992450: Mon Jul 22 12:12:51 2024 00:17:43.283 read: IOPS=319, BW=1277KiB/s (1308kB/s)(3672KiB/2875msec) 00:17:43.283 slat (nsec): min=4801, max=40152, avg=10175.23, stdev=5929.82 00:17:43.283 clat (usec): min=243, max=42051, avg=3088.52, stdev=10331.42 00:17:43.283 lat (usec): min=251, max=42068, avg=3098.67, stdev=10334.72 00:17:43.283 clat percentiles (usec): 00:17:43.283 | 1.00th=[ 249], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:17:43.283 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 306], 00:17:43.283 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[41157], 00:17:43.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:43.283 | 99.99th=[42206] 00:17:43.283 bw ( KiB/s): min= 96, max= 104, per=0.67%, avg=97.60, stdev= 3.58, samples=5 00:17:43.283 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:43.283 lat (usec) : 250=1.96%, 500=90.86%, 750=0.22% 00:17:43.283 lat (msec) : 4=0.11%, 50=6.75% 00:17:43.283 cpu : usr=0.07%, sys=0.42%, ctx=920, majf=0, minf=1 00:17:43.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.283 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.283 issued rwts: total=919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.283 00:17:43.283 Run status group 0 (all jobs): 00:17:43.283 READ: bw=14.1MiB/s (14.8MB/s), 102KiB/s-8346KiB/s (104kB/s-8546kB/s), io=51.8MiB (54.3MB), run=2875-3663msec 00:17:43.283 00:17:43.283 Disk stats (read/write): 00:17:43.283 nvme0n1: ios=4654/0, merge=0/0, ticks=4322/0, in_queue=4322, util=99.37% 00:17:43.283 nvme0n2: ios=7683/0, merge=0/0, ticks=4372/0, in_queue=4372, util=99.28% 00:17:43.283 nvme0n3: ios=130/0, merge=0/0, ticks=4179/0, in_queue=4179, util=99.66% 00:17:43.283 nvme0n4: ios=969/0, merge=0/0, ticks=3940/0, in_queue=3940, util=99.52% 00:17:43.541 12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:43.541 12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:43.799 12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:43.799 
12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:44.056 12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:44.057 12:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:44.314 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:44.314 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 992278 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:44.571 nvmf hotplug test: fio failed as expected 00:17:44.571 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.829 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:44.829 rmmod nvme_tcp 00:17:44.829 rmmod nvme_fabrics 00:17:45.086 rmmod nvme_keyring 
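The hotplug sequence traced above follows a fixed pattern: a 10-second read job is started in the background, the backing raid and malloc bdevs are deleted out from under it via rpc.py, and the harness then expects fio to exit non-zero, with err=121 (Remote I/O error) on every file, before tearing the target down. A condensed sketch of that pattern, with /path/to/spdk as a placeholder and the bdev names taken from this run (the real script interleaves the deletes rather than batching them), is:

  # Start a long-running read workload against the exported namespaces.
  /path/to/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Remove the backing bdevs while fio is still issuing I/O.
  rpc=/path/to/spdk/scripts/rpc.py
  $rpc bdev_raid_delete concat0
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$m"
  done

  # fio must fail once its storage disappears (Remote I/O error).
  if wait "$fio_pid"; then
      echo "unexpected: fio survived bdev removal" >&2
      exit 1
  fi
  echo "nvmf hotplug test: fio failed as expected"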
00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 990381 ']' 00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 990381 00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 990381 ']' 00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 990381 00:17:45.086 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:45.087 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.087 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 990381 00:17:45.087 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:45.087 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:45.087 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 990381' 00:17:45.087 killing process with pid 990381 00:17:45.087 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 990381 00:17:45.087 12:12:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 990381 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.346 12:12:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.249 12:12:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:47.249 00:17:47.249 real 0m23.256s 00:17:47.249 user 1m20.323s 00:17:47.249 sys 0m6.735s 00:17:47.249 12:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.249 12:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.249 ************************************ 00:17:47.249 END TEST nvmf_fio_target 00:17:47.249 ************************************ 00:17:47.249 12:12:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:47.249 12:12:55 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:47.249 12:12:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:47.249 12:12:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.249 12:12:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.249 ************************************ 00:17:47.250 START TEST nvmf_bdevio 00:17:47.250 ************************************ 00:17:47.250 12:12:55 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:47.250 * Looking for test storage... 00:17:47.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.250 12:12:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.508 12:12:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:49.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:49.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:49.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:49.419 
Found net devices under 0000:0a:00.1: cvl_0_1 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.419 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:49.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:17:49.420 00:17:49.420 --- 10.0.0.2 ping statistics --- 00:17:49.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.420 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:17:49.420 00:17:49.420 --- 10.0.0.1 ping statistics --- 00:17:49.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.420 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=994989 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 994989 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 994989 ']' 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.420 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.420 [2024-07-22 12:12:57.328602] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:17:49.420 [2024-07-22 12:12:57.328700] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.711 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.711 [2024-07-22 12:12:57.370024] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:49.711 [2024-07-22 12:12:57.396376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.711 [2024-07-22 12:12:57.486300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
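[editor's note] The nvmf_tcp_init sequence traced above builds a point-to-point link between two ports of the same NIC: the target port is moved into a network namespace and the initiator port stays in the root namespace. A minimal sketch of those exact steps, using the interface names and addresses from this run (cvl_0_0 = target, cvl_0_1 = initiator):

    # Reproduce the nvmf_tcp_init plumbing traced above
    ip netns add cvl_0_0_ns_spdk                                  # isolated ns for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP replies
    ping -c 1 10.0.0.2                                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns

Once both pings succeed, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace. [end note]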
00:17:49.711 [2024-07-22 12:12:57.486367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.712 [2024-07-22 12:12:57.486380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.712 [2024-07-22 12:12:57.486391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.712 [2024-07-22 12:12:57.486400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.712 [2024-07-22 12:12:57.486497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:49.712 [2024-07-22 12:12:57.486544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:49.712 [2024-07-22 12:12:57.486634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:49.712 [2024-07-22 12:12:57.486707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.712 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.712 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:49.712 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.712 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.712 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.971 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.972 [2024-07-22 12:12:57.640546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.972 Malloc0 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.972 12:12:57 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.972 [2024-07-22 12:12:57.691524] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:49.972 { 00:17:49.972 "params": { 00:17:49.972 "name": "Nvme$subsystem", 00:17:49.972 "trtype": "$TEST_TRANSPORT", 00:17:49.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.972 "adrfam": "ipv4", 00:17:49.972 "trsvcid": "$NVMF_PORT", 00:17:49.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.972 "hdgst": ${hdgst:-false}, 00:17:49.972 "ddgst": ${ddgst:-false} 00:17:49.972 }, 00:17:49.972 "method": "bdev_nvme_attach_controller" 00:17:49.972 } 00:17:49.972 EOF 00:17:49.972 )") 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:49.972 12:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:49.972 "params": { 00:17:49.972 "name": "Nvme1", 00:17:49.972 "trtype": "tcp", 00:17:49.972 "traddr": "10.0.0.2", 00:17:49.972 "adrfam": "ipv4", 00:17:49.972 "trsvcid": "4420", 00:17:49.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.972 "hdgst": false, 00:17:49.972 "ddgst": false 00:17:49.972 }, 00:17:49.972 "method": "bdev_nvme_attach_controller" 00:17:49.972 }' 00:17:49.972 [2024-07-22 12:12:57.734951] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:17:49.972 [2024-07-22 12:12:57.735044] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995136 ] 00:17:49.972 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.972 [2024-07-22 12:12:57.768200] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
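[editor's note] The "--json /dev/fd/62" argument above is bash process substitution: gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per subsystem (heredoc template, variables expanded, then jq), and bdevio reads the result as its JSON config. A condensed sketch of that pattern; render_config here is a hypothetical stand-in for the heredoc/jq pipeline traced above, and the field values are the ones printed in this run:

    # Condensed sketch of the config plumbing behind --json /dev/fd/62
    render_config() {
      cat <<'EOF' | jq .
    {
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    }
    ./bdevio --json <(render_config)   # the substitution surfaces as /dev/fd/62

[end note]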
00:17:49.972 [2024-07-22 12:12:57.797441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:49.972 [2024-07-22 12:12:57.886669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.972 [2024-07-22 12:12:57.886722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.972 [2024-07-22 12:12:57.886725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.540 I/O targets: 00:17:50.540 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:50.540 00:17:50.540 00:17:50.540 CUnit - A unit testing framework for C - Version 2.1-3 00:17:50.540 http://cunit.sourceforge.net/ 00:17:50.540 00:17:50.540 00:17:50.540 Suite: bdevio tests on: Nvme1n1 00:17:50.540 Test: blockdev write read block ...passed 00:17:50.540 Test: blockdev write zeroes read block ...passed 00:17:50.540 Test: blockdev write zeroes read no split ...passed 00:17:50.540 Test: blockdev write zeroes read split ...passed 00:17:50.540 Test: blockdev write zeroes read split partial ...passed 00:17:50.540 Test: blockdev reset ...[2024-07-22 12:12:58.385526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:50.540 [2024-07-22 12:12:58.385649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f9940 (9): Bad file descriptor 00:17:50.540 [2024-07-22 12:12:58.403778] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:50.540 passed 00:17:50.540 Test: blockdev write read 8 blocks ...passed 00:17:50.540 Test: blockdev write read size > 128k ...passed 00:17:50.540 Test: blockdev write read invalid size ...passed 00:17:50.800 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:50.800 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:50.800 Test: blockdev write read max offset ...passed 00:17:50.800 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:50.800 Test: blockdev writev readv 8 blocks ...passed 00:17:50.800 Test: blockdev writev readv 30 x 1block ...passed 00:17:50.800 Test: blockdev writev readv block ...passed 00:17:50.800 Test: blockdev writev readv size > 128k ...passed 00:17:50.800 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:50.800 Test: blockdev comparev and writev ...[2024-07-22 12:12:58.619561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.800 [2024-07-22 12:12:58.619603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.619637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.800 [2024-07-22 12:12:58.619665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.620023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.800 [2024-07-22 12:12:58.620048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.620071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:17:50.800 [2024-07-22 12:12:58.620087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.620426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.800 [2024-07-22 12:12:58.620450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.620472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.800 [2024-07-22 12:12:58.620488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.620834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.800 [2024-07-22 12:12:58.620859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.620881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.800 [2024-07-22 12:12:58.620897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.800 passed 00:17:50.800 Test: blockdev nvme passthru rw ...passed 00:17:50.800 Test: blockdev nvme passthru vendor specific ...[2024-07-22 12:12:58.703937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.800 [2024-07-22 12:12:58.703964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.704153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.800 [2024-07-22 12:12:58.704177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.704363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.800 [2024-07-22 12:12:58.704386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.800 [2024-07-22 12:12:58.704564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.800 [2024-07-22 12:12:58.704588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.800 passed 00:17:50.800 Test: blockdev nvme admin passthru ...passed 00:17:51.060 Test: blockdev copy ...passed 00:17:51.060 00:17:51.060 Run Summary: Type Total Ran Passed Failed Inactive 00:17:51.060 suites 1 1 n/a 0 0 00:17:51.060 tests 23 23 23 0 0 00:17:51.060 asserts 152 152 152 0 n/a 00:17:51.060 00:17:51.060 Elapsed time = 1.145 seconds 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.060 12:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.060 rmmod nvme_tcp 00:17:51.060 rmmod nvme_fabrics 00:17:51.319 rmmod nvme_keyring 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 994989 ']' 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 994989 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 994989 ']' 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 994989 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994989 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994989' 00:17:51.319 killing process with pid 994989 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 994989 00:17:51.319 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 994989 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.578 12:12:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.480 12:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:53.480 00:17:53.480 real 0m6.216s 00:17:53.480 user 0m10.274s 00:17:53.480 sys 0m2.016s 00:17:53.480 12:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:53.480 
12:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:53.480 ************************************ 00:17:53.480 END TEST nvmf_bdevio 00:17:53.480 ************************************ 00:17:53.480 12:13:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:53.480 12:13:01 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:53.480 12:13:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:53.480 12:13:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.480 12:13:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:53.480 ************************************ 00:17:53.480 START TEST nvmf_auth_target 00:17:53.480 ************************************ 00:17:53.480 12:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:53.738 * Looking for test storage... 00:17:53.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.738 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 
-- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.739 12:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:55.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:55.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.635 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:55.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:55.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:55.636 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:55.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:17:55.895 00:17:55.895 --- 10.0.0.2 ping statistics --- 00:17:55.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.895 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:17:55.895 00:17:55.895 --- 10.0.0.1 ping statistics --- 00:17:55.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.895 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=997203 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 997203 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 997203 
']' 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.895 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=997222 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:56.153 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aa4bfec28a48f025152ce8bd69734e9f28dbabd84800312a 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ivo 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aa4bfec28a48f025152ce8bd69734e9f28dbabd84800312a 0 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aa4bfec28a48f025152ce8bd69734e9f28dbabd84800312a 0 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aa4bfec28a48f025152ce8bd69734e9f28dbabd84800312a 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:56.154 12:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 
/tmp/spdk.key-null.Ivo 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ivo 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Ivo 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d34b85078cc648ea240555a554d70d86956c7a018326eb8008991ae3aec2368c 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VTX 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d34b85078cc648ea240555a554d70d86956c7a018326eb8008991ae3aec2368c 3 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d34b85078cc648ea240555a554d70d86956c7a018326eb8008991ae3aec2368c 3 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d34b85078cc648ea240555a554d70d86956c7a018326eb8008991ae3aec2368c 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VTX 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VTX 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.VTX 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c62ead4ef18c0e5746e121c8562b8835 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iSh 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key c62ead4ef18c0e5746e121c8562b8835 1 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c62ead4ef18c0e5746e121c8562b8835 1 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c62ead4ef18c0e5746e121c8562b8835 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:56.154 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iSh 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iSh 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.iSh 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f9d71d8af7b33b1ee52f9f55bb0437ff7774197620ed746f 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tr6 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f9d71d8af7b33b1ee52f9f55bb0437ff7774197620ed746f 2 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f9d71d8af7b33b1ee52f9f55bb0437ff7774197620ed746f 2 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f9d71d8af7b33b1ee52f9f55bb0437ff7774197620ed746f 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tr6 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tr6 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Tr6 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 
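
The gen_dhchap_key calls traced above all follow one pattern: draw len/2 random bytes as a hex string with xxd, wrap that string into a DHHC-1 secret, and park the result in a mode-0600 temp file whose path becomes the keys[]/ckeys[] entry. The log shows only "python -" for the formatting step, so the sketch below reconstructs it under one assumption: the base64 payload is the ASCII key string followed by its little-endian CRC-32, the usual DHHC-1 secret layout. That assumption is at least consistent with the log itself, since the DHHC-1:00:YWE0YmZl...PmXTfA==: secret passed to nvme connect later decodes to exactly the hex string generated here plus a 4-byte suffix.

# Minimal standalone sketch of the key generation traced above (bash + python3).
# The python body is an assumption; the shell steps mirror the xtrace.
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 'len' hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex string itself is the secret body
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC-32 integrity suffix
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"       # caller stores this path in keys[] / ckeys[]
}
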
00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aa068e3aec0cc446c63b936942dac6c031c82092c8e3b9dc 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.H07 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aa068e3aec0cc446c63b936942dac6c031c82092c8e3b9dc 2 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aa068e3aec0cc446c63b936942dac6c031c82092c8e3b9dc 2 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aa068e3aec0cc446c63b936942dac6c031c82092c8e3b9dc 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.H07 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.H07 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.H07 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=156640d8937632cf2b6589474bb20938 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.n9j 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 156640d8937632cf2b6589474bb20938 1 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 156640d8937632cf2b6589474bb20938 1 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=156640d8937632cf2b6589474bb20938 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 
/tmp/spdk.key-sha256.n9j 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.n9j 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.n9j 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=64597f3055991d823610951e239b386a2537dbe190b6dec5543451cf7dd6a651 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ozR 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 64597f3055991d823610951e239b386a2537dbe190b6dec5543451cf7dd6a651 3 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 64597f3055991d823610951e239b386a2537dbe190b6dec5543451cf7dd6a651 3 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=64597f3055991d823610951e239b386a2537dbe190b6dec5543451cf7dd6a651 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:56.412 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ozR 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ozR 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ozR 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 997203 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 997203 ']' 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
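
From here the test drives two SPDK processes: the NVMe-oF target already listening on /var/tmp/spdk.sock (pid 997203, reached through rpc_cmd) and a second spdk_tgt started as the authenticating host on /var/tmp/host.sock (pid 997222, reached through the hostrpc wrapper). Each key file is registered with both keyrings, because the target needs keyN/ckeyN to verify the host while the host needs the same material to answer the challenge. The registration loop that follows, condensed, with paths as in the log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.Ivo /tmp/spdk.key-sha256.iSh /tmp/spdk.key-sha384.H07 /tmp/spdk.key-sha512.ozR)
ckeys=(/tmp/spdk.key-sha512.VTX /tmp/spdk.key-sha384.Tr6 /tmp/spdk.key-sha256.n9j "")

for i in "${!keys[@]}"; do
    $RPC keyring_file_add_key "key$i" "${keys[$i]}"                        # target keyring (default /var/tmp/spdk.sock)
    $RPC -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host keyring
    if [[ -n ${ckeys[$i]} ]]; then                                         # key3 has no ctrlr key
        $RPC keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        $RPC -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done
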
00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.670 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 997222 /var/tmp/host.sock 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 997222 ']' 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:56.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.928 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ivo 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ivo 00:17:57.187 12:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ivo 00:17:57.445 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.VTX ]] 00:17:57.445 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VTX 00:17:57.445 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.445 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.445 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.445 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VTX 00:17:57.445 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VTX 00:17:57.705 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:57.705 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iSh 00:17:57.705 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.705 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.705 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.705 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.iSh 00:17:57.705 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.iSh 00:17:57.962 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Tr6 ]] 00:17:57.962 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tr6 00:17:57.962 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.962 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.962 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.962 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tr6 00:17:57.962 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tr6 00:17:58.219 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:58.219 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.H07 00:17:58.219 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.219 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.219 12:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.219 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.H07 00:17:58.219 12:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.H07 00:17:58.219 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.n9j ]] 00:17:58.219 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n9j 00:17:58.219 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.219 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.219 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.219 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n9j 00:17:58.219 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.n9j 00:17:58.477 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:58.477 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ozR 00:17:58.477 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.477 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.477 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.477 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ozR 00:17:58.477 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ozR 00:17:58.765 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:58.765 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:58.765 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.765 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.765 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.765 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.021 12:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.587 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.587 12:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.844 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.844 { 00:17:59.845 "cntlid": 1, 00:17:59.845 "qid": 0, 00:17:59.845 "state": "enabled", 00:17:59.845 "thread": "nvmf_tgt_poll_group_000", 00:17:59.845 "listen_address": { 00:17:59.845 "trtype": "TCP", 00:17:59.845 "adrfam": "IPv4", 00:17:59.845 "traddr": "10.0.0.2", 00:17:59.845 "trsvcid": "4420" 00:17:59.845 }, 00:17:59.845 "peer_address": { 00:17:59.845 "trtype": "TCP", 00:17:59.845 "adrfam": "IPv4", 00:17:59.845 "traddr": "10.0.0.1", 00:17:59.845 "trsvcid": "35792" 00:17:59.845 }, 00:17:59.845 "auth": { 00:17:59.845 "state": "completed", 00:17:59.845 "digest": "sha256", 00:17:59.845 "dhgroup": "null" 00:17:59.845 } 00:17:59.845 } 00:17:59.845 ]' 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.845 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.102 12:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:18:01.038 12:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.038 12:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.038 12:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.038 12:13:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.038 12:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.038 12:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.038 12:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.038 12:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.296 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.554 00:18:01.554 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.554 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.554 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.811 { 00:18:01.811 "cntlid": 3, 00:18:01.811 "qid": 0, 00:18:01.811 
"state": "enabled", 00:18:01.811 "thread": "nvmf_tgt_poll_group_000", 00:18:01.811 "listen_address": { 00:18:01.811 "trtype": "TCP", 00:18:01.811 "adrfam": "IPv4", 00:18:01.811 "traddr": "10.0.0.2", 00:18:01.811 "trsvcid": "4420" 00:18:01.811 }, 00:18:01.811 "peer_address": { 00:18:01.811 "trtype": "TCP", 00:18:01.811 "adrfam": "IPv4", 00:18:01.811 "traddr": "10.0.0.1", 00:18:01.811 "trsvcid": "35826" 00:18:01.811 }, 00:18:01.811 "auth": { 00:18:01.811 "state": "completed", 00:18:01.811 "digest": "sha256", 00:18:01.811 "dhgroup": "null" 00:18:01.811 } 00:18:01.811 } 00:18:01.811 ]' 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:01.811 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.069 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.069 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.069 12:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.328 12:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.263 12:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.555 12:13:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.555 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.812 00:18:03.812 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.812 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.812 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.069 { 00:18:04.069 "cntlid": 5, 00:18:04.069 "qid": 0, 00:18:04.069 "state": "enabled", 00:18:04.069 "thread": "nvmf_tgt_poll_group_000", 00:18:04.069 "listen_address": { 00:18:04.069 "trtype": "TCP", 00:18:04.069 "adrfam": "IPv4", 00:18:04.069 "traddr": "10.0.0.2", 00:18:04.069 "trsvcid": "4420" 00:18:04.069 }, 00:18:04.069 "peer_address": { 00:18:04.069 "trtype": "TCP", 00:18:04.069 "adrfam": "IPv4", 00:18:04.069 "traddr": "10.0.0.1", 00:18:04.069 "trsvcid": "35844" 00:18:04.069 }, 00:18:04.069 "auth": { 00:18:04.069 "state": "completed", 00:18:04.069 "digest": "sha256", 00:18:04.069 "dhgroup": "null" 00:18:04.069 } 00:18:04.069 } 00:18:04.069 ]' 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.069 12:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.327 12:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.262 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.520 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.087 00:18:06.087 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.087 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.087 12:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.345 { 00:18:06.345 "cntlid": 7, 00:18:06.345 "qid": 0, 00:18:06.345 "state": "enabled", 00:18:06.345 "thread": "nvmf_tgt_poll_group_000", 00:18:06.345 "listen_address": { 00:18:06.345 "trtype": "TCP", 00:18:06.345 "adrfam": "IPv4", 00:18:06.345 "traddr": "10.0.0.2", 00:18:06.345 "trsvcid": "4420" 00:18:06.345 }, 00:18:06.345 "peer_address": { 00:18:06.345 "trtype": "TCP", 00:18:06.345 "adrfam": "IPv4", 00:18:06.345 "traddr": "10.0.0.1", 00:18:06.345 "trsvcid": "35870" 00:18:06.345 }, 00:18:06.345 "auth": { 00:18:06.345 "state": "completed", 00:18:06.345 "digest": "sha256", 00:18:06.345 "dhgroup": "null" 00:18:06.345 } 00:18:06.345 } 00:18:06.345 ]' 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.345 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.604 12:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:07.542 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.800 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.059 00:18:08.059 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.059 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.059 12:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.317 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.317 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.317 12:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:08.317 12:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.317 12:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.317 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.317 { 00:18:08.317 "cntlid": 9, 00:18:08.317 "qid": 0, 00:18:08.317 "state": "enabled", 00:18:08.317 "thread": "nvmf_tgt_poll_group_000", 00:18:08.317 "listen_address": { 00:18:08.317 "trtype": "TCP", 00:18:08.317 "adrfam": "IPv4", 00:18:08.317 "traddr": "10.0.0.2", 00:18:08.317 "trsvcid": "4420" 00:18:08.317 }, 00:18:08.317 "peer_address": { 00:18:08.317 "trtype": "TCP", 00:18:08.317 "adrfam": "IPv4", 00:18:08.317 "traddr": "10.0.0.1", 00:18:08.317 "trsvcid": "35902" 00:18:08.317 }, 00:18:08.317 "auth": { 00:18:08.317 "state": "completed", 00:18:08.317 "digest": "sha256", 00:18:08.317 "dhgroup": "ffdhe2048" 00:18:08.317 } 00:18:08.317 } 00:18:08.317 ]' 00:18:08.317 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.576 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.576 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.576 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.576 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.576 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.576 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.576 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.834 12:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:18:09.769 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.769 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.769 12:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.769 12:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.769 12:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.769 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.769 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.770 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.029 12:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.287 00:18:10.287 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.287 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.287 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.546 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.546 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.546 12:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.546 12:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.546 12:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.546 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.546 { 00:18:10.546 "cntlid": 11, 00:18:10.546 "qid": 0, 00:18:10.546 "state": "enabled", 00:18:10.546 "thread": "nvmf_tgt_poll_group_000", 00:18:10.546 "listen_address": { 00:18:10.546 "trtype": "TCP", 00:18:10.546 "adrfam": "IPv4", 00:18:10.546 "traddr": "10.0.0.2", 00:18:10.546 "trsvcid": "4420" 00:18:10.546 }, 00:18:10.546 "peer_address": { 00:18:10.546 "trtype": "TCP", 00:18:10.546 "adrfam": "IPv4", 00:18:10.546 "traddr": "10.0.0.1", 00:18:10.546 "trsvcid": "43706" 00:18:10.546 }, 00:18:10.546 "auth": { 00:18:10.546 "state": "completed", 00:18:10.546 "digest": "sha256", 00:18:10.546 "dhgroup": "ffdhe2048" 00:18:10.546 } 00:18:10.546 } 00:18:10.546 ]' 00:18:10.546 
12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.804 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.804 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.804 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.804 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.804 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.804 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.804 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.062 12:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.994 12:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.252 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.509 00:18:12.509 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.509 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.509 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.767 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.767 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.767 12:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.767 12:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.767 12:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.767 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.767 { 00:18:12.767 "cntlid": 13, 00:18:12.767 "qid": 0, 00:18:12.767 "state": "enabled", 00:18:12.767 "thread": "nvmf_tgt_poll_group_000", 00:18:12.767 "listen_address": { 00:18:12.767 "trtype": "TCP", 00:18:12.767 "adrfam": "IPv4", 00:18:12.767 "traddr": "10.0.0.2", 00:18:12.767 "trsvcid": "4420" 00:18:12.767 }, 00:18:12.767 "peer_address": { 00:18:12.767 "trtype": "TCP", 00:18:12.767 "adrfam": "IPv4", 00:18:12.767 "traddr": "10.0.0.1", 00:18:12.767 "trsvcid": "43734" 00:18:12.767 }, 00:18:12.767 "auth": { 00:18:12.767 "state": "completed", 00:18:12.767 "digest": "sha256", 00:18:12.767 "dhgroup": "ffdhe2048" 00:18:12.767 } 00:18:12.767 } 00:18:12.767 ]' 00:18:12.767 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.025 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.025 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.025 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.025 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.025 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.025 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.025 12:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.282 12:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.221 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.479 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.046 00:18:15.046 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.046 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.046 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.046 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.046 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.046 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.046 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.304 12:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.304 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.304 { 00:18:15.304 "cntlid": 15, 00:18:15.304 "qid": 0, 00:18:15.304 "state": "enabled", 00:18:15.304 "thread": "nvmf_tgt_poll_group_000", 00:18:15.304 "listen_address": { 00:18:15.304 "trtype": "TCP", 00:18:15.304 "adrfam": "IPv4", 00:18:15.304 "traddr": "10.0.0.2", 00:18:15.304 "trsvcid": "4420" 00:18:15.304 }, 00:18:15.304 "peer_address": { 00:18:15.304 "trtype": "TCP", 00:18:15.304 "adrfam": "IPv4", 00:18:15.304 "traddr": "10.0.0.1", 00:18:15.304 "trsvcid": "43752" 00:18:15.304 }, 00:18:15.304 "auth": { 00:18:15.304 "state": "completed", 00:18:15.304 "digest": "sha256", 00:18:15.304 "dhgroup": "ffdhe2048" 00:18:15.304 } 00:18:15.304 } 00:18:15.304 ]' 00:18:15.304 12:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.304 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.304 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.304 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.304 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.304 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.304 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.304 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.560 12:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:16.495 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:16.752 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:16.752 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.752 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.752 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:16.752 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.752 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.752 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.753 12:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.753 12:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.753 12:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.753 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.753 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.352 00:18:17.352 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.352 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.352 12:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.352 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.352 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.352 12:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.352 12:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.352 12:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.352 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.352 { 00:18:17.352 "cntlid": 17, 00:18:17.352 "qid": 0, 00:18:17.352 "state": "enabled", 00:18:17.352 "thread": "nvmf_tgt_poll_group_000", 00:18:17.352 "listen_address": { 00:18:17.352 "trtype": "TCP", 00:18:17.352 "adrfam": "IPv4", 00:18:17.352 "traddr": 
"10.0.0.2", 00:18:17.352 "trsvcid": "4420" 00:18:17.352 }, 00:18:17.352 "peer_address": { 00:18:17.352 "trtype": "TCP", 00:18:17.352 "adrfam": "IPv4", 00:18:17.352 "traddr": "10.0.0.1", 00:18:17.352 "trsvcid": "43782" 00:18:17.352 }, 00:18:17.352 "auth": { 00:18:17.352 "state": "completed", 00:18:17.352 "digest": "sha256", 00:18:17.352 "dhgroup": "ffdhe3072" 00:18:17.352 } 00:18:17.352 } 00:18:17.352 ]' 00:18:17.352 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.609 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.609 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.609 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.609 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.609 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.609 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.609 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.865 12:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:18:18.798 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.799 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.799 12:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.799 12:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.799 12:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.799 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.799 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.799 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.057 12:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.314 00:18:19.314 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.314 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.314 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.571 { 00:18:19.571 "cntlid": 19, 00:18:19.571 "qid": 0, 00:18:19.571 "state": "enabled", 00:18:19.571 "thread": "nvmf_tgt_poll_group_000", 00:18:19.571 "listen_address": { 00:18:19.571 "trtype": "TCP", 00:18:19.571 "adrfam": "IPv4", 00:18:19.571 "traddr": "10.0.0.2", 00:18:19.571 "trsvcid": "4420" 00:18:19.571 }, 00:18:19.571 "peer_address": { 00:18:19.571 "trtype": "TCP", 00:18:19.571 "adrfam": "IPv4", 00:18:19.571 "traddr": "10.0.0.1", 00:18:19.571 "trsvcid": "56974" 00:18:19.571 }, 00:18:19.571 "auth": { 00:18:19.571 "state": "completed", 00:18:19.571 "digest": "sha256", 00:18:19.571 "dhgroup": "ffdhe3072" 00:18:19.571 } 00:18:19.571 } 00:18:19.571 ]' 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.571 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.828 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.828 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.828 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.828 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.828 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.085 12:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.018 12:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.275 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.531 00:18:21.531 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.531 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.531 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.788 { 00:18:21.788 "cntlid": 21, 00:18:21.788 "qid": 0, 00:18:21.788 "state": "enabled", 00:18:21.788 "thread": "nvmf_tgt_poll_group_000", 00:18:21.788 "listen_address": { 00:18:21.788 "trtype": "TCP", 00:18:21.788 "adrfam": "IPv4", 00:18:21.788 "traddr": "10.0.0.2", 00:18:21.788 "trsvcid": "4420" 00:18:21.788 }, 00:18:21.788 "peer_address": { 00:18:21.788 "trtype": "TCP", 00:18:21.788 "adrfam": "IPv4", 00:18:21.788 "traddr": "10.0.0.1", 00:18:21.788 "trsvcid": "57016" 00:18:21.788 }, 00:18:21.788 "auth": { 00:18:21.788 "state": "completed", 00:18:21.788 "digest": "sha256", 00:18:21.788 "dhgroup": "ffdhe3072" 00:18:21.788 } 00:18:21.788 } 00:18:21.788 ]' 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.788 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.045 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.045 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.045 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.045 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.045 12:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.303 12:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
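
The dhgroup passes above (ffdhe2048, then ffdhe3072, with ffdhe4096 and ffdhe6144 following) all drive the same fixed sequence per digest/dhgroup/key combination. A minimal sketch of one iteration, reconstructed from the xtrace — the rpc.py path, sockets, NQNs, addresses and key names are copied from this log, while the hostrpc/rpc_cmd shorthands and the surrounding loop are assumptions about what target/auth.sh does:

# One connect_authenticate pass (sketch). hostrpc talks to the SPDK host stack
# on /var/tmp/host.sock; rpc_cmd goes to the target's default RPC socket.
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# Pin the host to exactly one digest/dhgroup so the negotiation is deterministic.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Register the host NQN on the subsystem with the key pair under test (target side).
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach through the SPDK host stack; DH-HMAC-CHAP runs during this connect.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the authenticated qpair, tear down, then repeat the connect with the
# kernel initiator (nvme connect ... --dhchap-secret ...) and remove the host.
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
hostrpc bdev_nvme_detach_controller nvme0
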
00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.237 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.495 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.752 00:18:23.752 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.752 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.752 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.010 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.010 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.010 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.010 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:24.010 12:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.010 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.010 { 00:18:24.010 "cntlid": 23, 00:18:24.010 "qid": 0, 00:18:24.010 "state": "enabled", 00:18:24.010 "thread": "nvmf_tgt_poll_group_000", 00:18:24.010 "listen_address": { 00:18:24.010 "trtype": "TCP", 00:18:24.010 "adrfam": "IPv4", 00:18:24.010 "traddr": "10.0.0.2", 00:18:24.010 "trsvcid": "4420" 00:18:24.010 }, 00:18:24.010 "peer_address": { 00:18:24.010 "trtype": "TCP", 00:18:24.010 "adrfam": "IPv4", 00:18:24.010 "traddr": "10.0.0.1", 00:18:24.010 "trsvcid": "57046" 00:18:24.010 }, 00:18:24.010 "auth": { 00:18:24.010 "state": "completed", 00:18:24.010 "digest": "sha256", 00:18:24.010 "dhgroup": "ffdhe3072" 00:18:24.010 } 00:18:24.010 } 00:18:24.010 ]' 00:18:24.010 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.276 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.276 12:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.276 12:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.276 12:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.276 12:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.276 12:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.276 12:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.545 12:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.478 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.736 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.302 00:18:26.302 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.302 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.302 12:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.559 { 00:18:26.559 "cntlid": 25, 00:18:26.559 "qid": 0, 00:18:26.559 "state": "enabled", 00:18:26.559 "thread": "nvmf_tgt_poll_group_000", 00:18:26.559 "listen_address": { 00:18:26.559 "trtype": "TCP", 00:18:26.559 "adrfam": "IPv4", 00:18:26.559 "traddr": "10.0.0.2", 00:18:26.559 "trsvcid": "4420" 00:18:26.559 }, 00:18:26.559 "peer_address": { 00:18:26.559 "trtype": "TCP", 00:18:26.559 "adrfam": "IPv4", 00:18:26.559 "traddr": "10.0.0.1", 00:18:26.559 "trsvcid": "57082" 00:18:26.559 }, 00:18:26.559 "auth": { 00:18:26.559 "state": "completed", 00:18:26.559 "digest": "sha256", 00:18:26.559 "dhgroup": "ffdhe4096" 00:18:26.559 } 00:18:26.559 } 00:18:26.559 ]' 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.559 12:13:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.559 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.816 12:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.751 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.008 12:13:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.008 12:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.574 00:18:28.574 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.574 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.574 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.831 { 00:18:28.831 "cntlid": 27, 00:18:28.831 "qid": 0, 00:18:28.831 "state": "enabled", 00:18:28.831 "thread": "nvmf_tgt_poll_group_000", 00:18:28.831 "listen_address": { 00:18:28.831 "trtype": "TCP", 00:18:28.831 "adrfam": "IPv4", 00:18:28.831 "traddr": "10.0.0.2", 00:18:28.831 "trsvcid": "4420" 00:18:28.831 }, 00:18:28.831 "peer_address": { 00:18:28.831 "trtype": "TCP", 00:18:28.831 "adrfam": "IPv4", 00:18:28.831 "traddr": "10.0.0.1", 00:18:28.831 "trsvcid": "57106" 00:18:28.831 }, 00:18:28.831 "auth": { 00:18:28.831 "state": "completed", 00:18:28.831 "digest": "sha256", 00:18:28.831 "dhgroup": "ffdhe4096" 00:18:28.831 } 00:18:28.831 } 00:18:28.831 ]' 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.831 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.088 12:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.018 12:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.275 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:30.275 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.275 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.275 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:30.275 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.275 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.276 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.276 12:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.276 12:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.276 12:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.276 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.276 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.846 00:18:30.846 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.846 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.846 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.149 { 00:18:31.149 "cntlid": 29, 00:18:31.149 "qid": 0, 00:18:31.149 "state": "enabled", 00:18:31.149 "thread": "nvmf_tgt_poll_group_000", 00:18:31.149 "listen_address": { 00:18:31.149 "trtype": "TCP", 00:18:31.149 "adrfam": "IPv4", 00:18:31.149 "traddr": "10.0.0.2", 00:18:31.149 "trsvcid": "4420" 00:18:31.149 }, 00:18:31.149 "peer_address": { 00:18:31.149 "trtype": "TCP", 00:18:31.149 "adrfam": "IPv4", 00:18:31.149 "traddr": "10.0.0.1", 00:18:31.149 "trsvcid": "49602" 00:18:31.149 }, 00:18:31.149 "auth": { 00:18:31.149 "state": "completed", 00:18:31.149 "digest": "sha256", 00:18:31.149 "dhgroup": "ffdhe4096" 00:18:31.149 } 00:18:31.149 } 00:18:31.149 ]' 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.149 12:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.408 12:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
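
A note on the --dhchap-secret / --dhchap-ctrl-secret strings fed to nvme connect throughout this log: they use the DH-HMAC-CHAP secret representation from NVMe TP 8006, DHHC-1:<NN>:<base64(secret || crc32(secret))>:, where, as I read the spec, NN selects the secret transformation hash (00 = used as-is, 01 = SHA-256/32-byte, 02 = SHA-384/48-byte, 03 = SHA-512/64-byte). A quick way to sanity-check one of the keys above (GNU coreutils assumed for the negative head -c):

# A :01: key should decode to 32 secret bytes plus a 4-byte CRC-32 trailer.
key='DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1:'
printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c             # expect 36
printf '%s' "$key" | cut -d: -f3 | base64 -d | head -c -4 | xxd  # the raw secret
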
00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:32.347 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.622 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.189 00:18:33.189 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.189 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.189 12:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.189 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.189 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.189 12:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.189 12:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.189 12:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.189 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.189 { 00:18:33.189 "cntlid": 31, 00:18:33.189 "qid": 0, 00:18:33.189 "state": "enabled", 00:18:33.189 "thread": "nvmf_tgt_poll_group_000", 00:18:33.189 "listen_address": { 00:18:33.189 "trtype": "TCP", 00:18:33.189 "adrfam": "IPv4", 00:18:33.189 "traddr": "10.0.0.2", 00:18:33.189 "trsvcid": 
"4420" 00:18:33.189 }, 00:18:33.189 "peer_address": { 00:18:33.189 "trtype": "TCP", 00:18:33.189 "adrfam": "IPv4", 00:18:33.189 "traddr": "10.0.0.1", 00:18:33.189 "trsvcid": "49632" 00:18:33.189 }, 00:18:33.189 "auth": { 00:18:33.189 "state": "completed", 00:18:33.189 "digest": "sha256", 00:18:33.189 "dhgroup": "ffdhe4096" 00:18:33.189 } 00:18:33.189 } 00:18:33.189 ]' 00:18:33.189 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.448 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.448 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.448 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.448 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.448 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.448 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.448 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.706 12:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.638 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.639 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.895 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:34.895 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.895 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.895 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:34.895 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.895 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.896 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.896 12:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.896 12:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.896 12:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.896 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.896 12:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.462 00:18:35.462 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.462 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.462 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.718 { 00:18:35.718 "cntlid": 33, 00:18:35.718 "qid": 0, 00:18:35.718 "state": "enabled", 00:18:35.718 "thread": "nvmf_tgt_poll_group_000", 00:18:35.718 "listen_address": { 00:18:35.718 "trtype": "TCP", 00:18:35.718 "adrfam": "IPv4", 00:18:35.718 "traddr": "10.0.0.2", 00:18:35.718 "trsvcid": "4420" 00:18:35.718 }, 00:18:35.718 "peer_address": { 00:18:35.718 "trtype": "TCP", 00:18:35.718 "adrfam": "IPv4", 00:18:35.718 "traddr": "10.0.0.1", 00:18:35.718 "trsvcid": "49658" 00:18:35.718 }, 00:18:35.718 "auth": { 00:18:35.718 "state": "completed", 00:18:35.718 "digest": "sha256", 00:18:35.718 "dhgroup": "ffdhe6144" 00:18:35.718 } 00:18:35.718 } 00:18:35.718 ]' 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.718 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.976 12:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.350 12:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.350 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.918 00:18:37.918 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.918 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.918 12:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.175 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.175 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.175 12:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.175 12:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.175 12:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.175 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.175 { 00:18:38.175 "cntlid": 35, 00:18:38.175 "qid": 0, 00:18:38.175 "state": "enabled", 00:18:38.175 "thread": "nvmf_tgt_poll_group_000", 00:18:38.175 "listen_address": { 00:18:38.175 "trtype": "TCP", 00:18:38.175 "adrfam": "IPv4", 00:18:38.175 "traddr": "10.0.0.2", 00:18:38.175 "trsvcid": "4420" 00:18:38.175 }, 00:18:38.175 "peer_address": { 00:18:38.175 "trtype": "TCP", 00:18:38.175 "adrfam": "IPv4", 00:18:38.175 "traddr": "10.0.0.1", 00:18:38.175 "trsvcid": "49680" 00:18:38.175 }, 00:18:38.175 "auth": { 00:18:38.175 "state": "completed", 00:18:38.175 "digest": "sha256", 00:18:38.175 "dhgroup": "ffdhe6144" 00:18:38.175 } 00:18:38.175 } 00:18:38.175 ]' 00:18:38.175 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.432 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.432 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.432 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.432 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.432 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.432 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.432 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.690 12:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
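The round the trace keeps repeating for every digest/dhgroup/keyid combination is the connect_authenticate helper of target/auth.sh. Below is a condensed sketch of one round reconstructed from the commands above; it assumes rpc_cmd talks to the target app on its default RPC socket while hostrpc talks to the host-side bdev app on /var/tmp/host.sock (the -s argument visible in the log), and that key1/ckey1 are key names the script registered earlier:

    # One connect_authenticate round, reconstructed from the trace above.
    # tgt_rpc/host_rpc stand in for the script's rpc_cmd/hostrpc helpers.
    tgt_rpc()  { scripts/rpc.py "$@"; }                       # target app, default socket (assumed)
    host_rpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; } # host-side app, as in the log

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # limit the host to a single digest/dhgroup pair for this round
    host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # allow the host on the subsystem, bound to key1 (and controller key ckey1)
    tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # attaching the controller is what actually runs DH-HMAC-CHAP
    host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # the jq checks then assert digest/dhgroup/state on the negotiated qpair
    tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed
    host_rpc bdev_nvme_detach_controller nvme0

Once the RPC path has passed, the same keys are exercised through the kernel initiator (nvme connect / nvme disconnect), the host entry is removed with nvmf_subsystem_remove_host, and the loop advances to the next keyid.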
00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.625 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.884 12:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.449 00:18:40.449 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.449 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.449 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
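The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line preceding every add_host call relies on bash's ${var:+word} alternate-value expansion: the array receives the --dhchap-ctrlr-key argument pair only when ckeys[$3] is set and non-empty, and expands to nothing otherwise — which is why the keyid-3 rounds later in this trace carry no --dhchap-ctrlr-key at all. A minimal demo with hypothetical values:

    # ${ckeys[i]:+...} keeps the flag pair out of the argv entirely when
    # no controller key is configured for that keyid.
    ckeys=("c0" "c1" "c2" "")        # hypothetical; keyid 3 has no controller key
    for keyid in 1 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "keyid=$keyid -> ${ckey[*]:-<no extra args>}"
    done
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=3 -> <no extra args>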
00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.708 { 00:18:40.708 "cntlid": 37, 00:18:40.708 "qid": 0, 00:18:40.708 "state": "enabled", 00:18:40.708 "thread": "nvmf_tgt_poll_group_000", 00:18:40.708 "listen_address": { 00:18:40.708 "trtype": "TCP", 00:18:40.708 "adrfam": "IPv4", 00:18:40.708 "traddr": "10.0.0.2", 00:18:40.708 "trsvcid": "4420" 00:18:40.708 }, 00:18:40.708 "peer_address": { 00:18:40.708 "trtype": "TCP", 00:18:40.708 "adrfam": "IPv4", 00:18:40.708 "traddr": "10.0.0.1", 00:18:40.708 "trsvcid": "42596" 00:18:40.708 }, 00:18:40.708 "auth": { 00:18:40.708 "state": "completed", 00:18:40.708 "digest": "sha256", 00:18:40.708 "dhgroup": "ffdhe6144" 00:18:40.708 } 00:18:40.708 } 00:18:40.708 ]' 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.708 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.966 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.966 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.966 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.966 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.966 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.224 12:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:42.162 12:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.437 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.004 00:18:43.004 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.004 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.004 12:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.262 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.262 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.262 12:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.262 12:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.262 12:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.262 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.262 { 00:18:43.262 "cntlid": 39, 00:18:43.262 "qid": 0, 00:18:43.262 "state": "enabled", 00:18:43.262 "thread": "nvmf_tgt_poll_group_000", 00:18:43.262 "listen_address": { 00:18:43.262 "trtype": "TCP", 00:18:43.262 "adrfam": "IPv4", 00:18:43.262 "traddr": "10.0.0.2", 00:18:43.262 "trsvcid": "4420" 00:18:43.262 }, 00:18:43.262 "peer_address": { 00:18:43.262 "trtype": "TCP", 00:18:43.263 "adrfam": "IPv4", 00:18:43.263 "traddr": "10.0.0.1", 00:18:43.263 "trsvcid": "42622" 00:18:43.263 }, 00:18:43.263 "auth": { 00:18:43.263 "state": "completed", 00:18:43.263 "digest": "sha256", 00:18:43.263 "dhgroup": "ffdhe6144" 00:18:43.263 } 00:18:43.263 } 00:18:43.263 ]' 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.263 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.522 12:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:18:44.453 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.712 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.010 12:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.941 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.941 { 00:18:45.941 "cntlid": 41, 00:18:45.941 "qid": 0, 00:18:45.941 "state": "enabled", 00:18:45.941 "thread": "nvmf_tgt_poll_group_000", 00:18:45.941 "listen_address": { 00:18:45.941 "trtype": "TCP", 00:18:45.941 "adrfam": "IPv4", 00:18:45.941 "traddr": "10.0.0.2", 00:18:45.941 "trsvcid": "4420" 00:18:45.941 }, 00:18:45.941 "peer_address": { 00:18:45.941 "trtype": "TCP", 00:18:45.941 "adrfam": "IPv4", 00:18:45.941 "traddr": "10.0.0.1", 00:18:45.941 "trsvcid": "42656" 00:18:45.941 }, 00:18:45.941 "auth": { 00:18:45.941 "state": "completed", 00:18:45.941 "digest": "sha256", 00:18:45.941 "dhgroup": "ffdhe8192" 00:18:45.941 } 00:18:45.941 } 00:18:45.941 ]' 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.941 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.200 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.200 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.200 12:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.460 12:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.392 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.649 12:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.579 00:18:48.579 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.579 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.579 12:13:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.837 { 00:18:48.837 "cntlid": 43, 00:18:48.837 "qid": 0, 00:18:48.837 "state": "enabled", 00:18:48.837 "thread": "nvmf_tgt_poll_group_000", 00:18:48.837 "listen_address": { 00:18:48.837 "trtype": "TCP", 00:18:48.837 "adrfam": "IPv4", 00:18:48.837 "traddr": "10.0.0.2", 00:18:48.837 "trsvcid": "4420" 00:18:48.837 }, 00:18:48.837 "peer_address": { 00:18:48.837 "trtype": "TCP", 00:18:48.837 "adrfam": "IPv4", 00:18:48.837 "traddr": "10.0.0.1", 00:18:48.837 "trsvcid": "42674" 00:18:48.837 }, 00:18:48.837 "auth": { 00:18:48.837 "state": "completed", 00:18:48.837 "digest": "sha256", 00:18:48.837 "dhgroup": "ffdhe8192" 00:18:48.837 } 00:18:48.837 } 00:18:48.837 ]' 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.837 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.095 12:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.353 12:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.287 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.544 12:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.493 00:18:51.493 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.493 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.493 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.749 { 00:18:51.749 "cntlid": 45, 00:18:51.749 "qid": 0, 00:18:51.749 "state": "enabled", 00:18:51.749 "thread": "nvmf_tgt_poll_group_000", 00:18:51.749 "listen_address": { 00:18:51.749 "trtype": "TCP", 00:18:51.749 "adrfam": "IPv4", 00:18:51.749 "traddr": "10.0.0.2", 00:18:51.749 
"trsvcid": "4420" 00:18:51.749 }, 00:18:51.749 "peer_address": { 00:18:51.749 "trtype": "TCP", 00:18:51.749 "adrfam": "IPv4", 00:18:51.749 "traddr": "10.0.0.1", 00:18:51.749 "trsvcid": "56926" 00:18:51.749 }, 00:18:51.749 "auth": { 00:18:51.749 "state": "completed", 00:18:51.749 "digest": "sha256", 00:18:51.749 "dhgroup": "ffdhe8192" 00:18:51.749 } 00:18:51.749 } 00:18:51.749 ]' 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.749 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.006 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.006 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.006 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.006 12:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:18:52.936 12:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.936 12:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.936 12:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.936 12:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 12:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.206 12:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.206 12:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.206 12:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.206 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.135 00:18:54.135 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.135 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.135 12:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.391 { 00:18:54.391 "cntlid": 47, 00:18:54.391 "qid": 0, 00:18:54.391 "state": "enabled", 00:18:54.391 "thread": "nvmf_tgt_poll_group_000", 00:18:54.391 "listen_address": { 00:18:54.391 "trtype": "TCP", 00:18:54.391 "adrfam": "IPv4", 00:18:54.391 "traddr": "10.0.0.2", 00:18:54.391 "trsvcid": "4420" 00:18:54.391 }, 00:18:54.391 "peer_address": { 00:18:54.391 "trtype": "TCP", 00:18:54.391 "adrfam": "IPv4", 00:18:54.391 "traddr": "10.0.0.1", 00:18:54.391 "trsvcid": "56954" 00:18:54.391 }, 00:18:54.391 "auth": { 00:18:54.391 "state": "completed", 00:18:54.391 "digest": "sha256", 00:18:54.391 "dhgroup": "ffdhe8192" 00:18:54.391 } 00:18:54.391 } 00:18:54.391 ]' 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.391 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.647 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.647 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:18:54.647 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.904 12:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.834 12:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.091 12:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.091 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.091 12:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.348 00:18:56.348 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.348 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.348 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.605 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.605 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.605 12:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.606 { 00:18:56.606 "cntlid": 49, 00:18:56.606 "qid": 0, 00:18:56.606 "state": "enabled", 00:18:56.606 "thread": "nvmf_tgt_poll_group_000", 00:18:56.606 "listen_address": { 00:18:56.606 "trtype": "TCP", 00:18:56.606 "adrfam": "IPv4", 00:18:56.606 "traddr": "10.0.0.2", 00:18:56.606 "trsvcid": "4420" 00:18:56.606 }, 00:18:56.606 "peer_address": { 00:18:56.606 "trtype": "TCP", 00:18:56.606 "adrfam": "IPv4", 00:18:56.606 "traddr": "10.0.0.1", 00:18:56.606 "trsvcid": "56988" 00:18:56.606 }, 00:18:56.606 "auth": { 00:18:56.606 "state": "completed", 00:18:56.606 "digest": "sha384", 00:18:56.606 "dhgroup": "null" 00:18:56.606 } 00:18:56.606 } 00:18:56.606 ]' 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.606 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.864 12:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:18:57.796 12:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.796 12:14:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.796 12:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.796 12:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.796 12:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.796 12:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.796 12:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:57.796 12:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.360 12:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.361 12:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.361 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.361 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.618 00:18:58.618 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.618 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.618 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.874 { 00:18:58.874 "cntlid": 51, 00:18:58.874 "qid": 0, 00:18:58.874 "state": "enabled", 00:18:58.874 "thread": "nvmf_tgt_poll_group_000", 00:18:58.874 "listen_address": { 00:18:58.874 "trtype": "TCP", 00:18:58.874 "adrfam": "IPv4", 00:18:58.874 "traddr": "10.0.0.2", 00:18:58.874 "trsvcid": "4420" 00:18:58.874 }, 00:18:58.874 "peer_address": { 00:18:58.874 "trtype": "TCP", 00:18:58.874 "adrfam": "IPv4", 00:18:58.874 "traddr": "10.0.0.1", 00:18:58.874 "trsvcid": "57004" 00:18:58.874 }, 00:18:58.874 "auth": { 00:18:58.874 "state": "completed", 00:18:58.874 "digest": "sha384", 00:18:58.874 "dhgroup": "null" 00:18:58.874 } 00:18:58.874 } 00:18:58.874 ]' 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.874 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.157 12:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:00.094 12:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:00.350 
12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.350 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.946 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.946 { 00:19:00.946 "cntlid": 53, 00:19:00.946 "qid": 0, 00:19:00.946 "state": "enabled", 00:19:00.946 "thread": "nvmf_tgt_poll_group_000", 00:19:00.946 "listen_address": { 00:19:00.946 "trtype": "TCP", 00:19:00.946 "adrfam": "IPv4", 00:19:00.946 "traddr": "10.0.0.2", 00:19:00.946 "trsvcid": "4420" 00:19:00.946 }, 00:19:00.946 "peer_address": { 00:19:00.946 "trtype": "TCP", 00:19:00.946 "adrfam": "IPv4", 00:19:00.946 "traddr": "10.0.0.1", 00:19:00.946 "trsvcid": "47664" 00:19:00.946 }, 00:19:00.946 "auth": { 00:19:00.946 "state": "completed", 00:19:00.946 "digest": "sha384", 00:19:00.946 "dhgroup": "null" 00:19:00.946 } 00:19:00.946 } 00:19:00.946 ]' 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:19:00.946 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.203 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.203 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.203 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.203 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.203 12:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.460 12:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:02.390 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:02.647 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:02.647 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.648 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.905 00:19:02.905 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.905 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.905 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.163 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.163 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.163 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.163 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.163 12:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.163 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.163 { 00:19:03.163 "cntlid": 55, 00:19:03.163 "qid": 0, 00:19:03.163 "state": "enabled", 00:19:03.163 "thread": "nvmf_tgt_poll_group_000", 00:19:03.163 "listen_address": { 00:19:03.163 "trtype": "TCP", 00:19:03.163 "adrfam": "IPv4", 00:19:03.163 "traddr": "10.0.0.2", 00:19:03.163 "trsvcid": "4420" 00:19:03.163 }, 00:19:03.163 "peer_address": { 00:19:03.163 "trtype": "TCP", 00:19:03.163 "adrfam": "IPv4", 00:19:03.163 "traddr": "10.0.0.1", 00:19:03.163 "trsvcid": "47696" 00:19:03.163 }, 00:19:03.163 "auth": { 00:19:03.163 "state": "completed", 00:19:03.163 "digest": "sha384", 00:19:03.163 "dhgroup": "null" 00:19:03.163 } 00:19:03.163 } 00:19:03.163 ]' 00:19:03.163 12:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.163 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.163 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.163 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.163 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.420 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.420 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.420 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.420 12:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:19:04.353 12:14:12 
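The nvme connect just logged is the kernel-initiator leg of the same check: the DHHC-1 secrets are passed inline to nvme-cli rather than by keyring name. A sketch of that leg with the secret string shortened (for key3 only the host secret exists, so no --dhchap-ctrl-secret is passed):

    # Kernel initiator: authenticate the connect with an inline secret.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:03:NjQ1...uA=:'

    # Tear down before the next digest/dhgroup/key combination.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0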
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.353 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.609 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.610 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.173 00:19:05.174 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.174 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.174 12:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.430 { 00:19:05.430 "cntlid": 57, 00:19:05.430 "qid": 0, 00:19:05.430 "state": "enabled", 00:19:05.430 "thread": "nvmf_tgt_poll_group_000", 00:19:05.430 "listen_address": { 00:19:05.430 "trtype": "TCP", 00:19:05.430 "adrfam": "IPv4", 00:19:05.430 "traddr": "10.0.0.2", 00:19:05.430 "trsvcid": "4420" 00:19:05.430 }, 00:19:05.430 "peer_address": { 00:19:05.430 "trtype": "TCP", 00:19:05.430 "adrfam": "IPv4", 00:19:05.430 "traddr": "10.0.0.1", 00:19:05.430 "trsvcid": "47726" 00:19:05.430 }, 00:19:05.430 "auth": { 00:19:05.430 "state": "completed", 00:19:05.430 "digest": "sha384", 00:19:05.430 "dhgroup": "ffdhe2048" 00:19:05.430 } 00:19:05.430 } 00:19:05.430 ]' 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.430 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.686 12:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.618 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.875 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.132 00:19:07.132 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.132 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.132 12:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.389 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.389 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.389 12:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.389 12:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.389 12:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.389 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.389 { 00:19:07.389 "cntlid": 59, 00:19:07.389 "qid": 0, 00:19:07.389 "state": "enabled", 00:19:07.389 "thread": "nvmf_tgt_poll_group_000", 00:19:07.389 "listen_address": { 00:19:07.389 "trtype": "TCP", 00:19:07.390 "adrfam": "IPv4", 00:19:07.390 "traddr": "10.0.0.2", 00:19:07.390 "trsvcid": "4420" 00:19:07.390 }, 00:19:07.390 "peer_address": { 00:19:07.390 "trtype": "TCP", 00:19:07.390 "adrfam": "IPv4", 00:19:07.390 
"traddr": "10.0.0.1", 00:19:07.390 "trsvcid": "47748" 00:19:07.390 }, 00:19:07.390 "auth": { 00:19:07.390 "state": "completed", 00:19:07.390 "digest": "sha384", 00:19:07.390 "dhgroup": "ffdhe2048" 00:19:07.390 } 00:19:07.390 } 00:19:07.390 ]' 00:19:07.390 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.390 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.390 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.646 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.646 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.646 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.646 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.646 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.903 12:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.835 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.092 12:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.350 00:19:09.350 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.350 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.350 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.608 { 00:19:09.608 "cntlid": 61, 00:19:09.608 "qid": 0, 00:19:09.608 "state": "enabled", 00:19:09.608 "thread": "nvmf_tgt_poll_group_000", 00:19:09.608 "listen_address": { 00:19:09.608 "trtype": "TCP", 00:19:09.608 "adrfam": "IPv4", 00:19:09.608 "traddr": "10.0.0.2", 00:19:09.608 "trsvcid": "4420" 00:19:09.608 }, 00:19:09.608 "peer_address": { 00:19:09.608 "trtype": "TCP", 00:19:09.608 "adrfam": "IPv4", 00:19:09.608 "traddr": "10.0.0.1", 00:19:09.608 "trsvcid": "39756" 00:19:09.608 }, 00:19:09.608 "auth": { 00:19:09.608 "state": "completed", 00:19:09.608 "digest": "sha384", 00:19:09.608 "dhgroup": "ffdhe2048" 00:19:09.608 } 00:19:09.608 } 00:19:09.608 ]' 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.608 12:14:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.866 12:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.800 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.058 12:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.624 00:19:11.624 12:14:19 
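The key3 passes, like the attach just above, carry no --dhchap-ctrlr-key. That is the conditional expansion shown at target/auth.sh@37 doing its job: the controller-key arguments are generated only when a ckey exists for the key index ($3 is the function's key-index argument):

    # Expands to (--dhchap-ctrlr-key ckeyN) when ckeys[N] is set and
    # non-empty, and to an empty array otherwise.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

Since the ckeys slot for key3 is empty, the host authenticates unidirectionally and the controller is not challenged in return.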
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.624 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.624 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.624 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.624 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.624 12:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.624 12:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.883 { 00:19:11.883 "cntlid": 63, 00:19:11.883 "qid": 0, 00:19:11.883 "state": "enabled", 00:19:11.883 "thread": "nvmf_tgt_poll_group_000", 00:19:11.883 "listen_address": { 00:19:11.883 "trtype": "TCP", 00:19:11.883 "adrfam": "IPv4", 00:19:11.883 "traddr": "10.0.0.2", 00:19:11.883 "trsvcid": "4420" 00:19:11.883 }, 00:19:11.883 "peer_address": { 00:19:11.883 "trtype": "TCP", 00:19:11.883 "adrfam": "IPv4", 00:19:11.883 "traddr": "10.0.0.1", 00:19:11.883 "trsvcid": "39790" 00:19:11.883 }, 00:19:11.883 "auth": { 00:19:11.883 "state": "completed", 00:19:11.883 "digest": "sha384", 00:19:11.883 "dhgroup": "ffdhe2048" 00:19:11.883 } 00:19:11.883 } 00:19:11.883 ]' 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.883 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.142 12:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
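With the ffdhe2048 combinations exhausted, the sweep advances to ffdhe3072. The xtrace markers at target/auth.sh@92-96 give the loop structure driving this whole section; the array contents noted below are inferred from the combinations the trace exercises, not visible verbatim in this slice:

    # sha384 is the digest under test in this slice; the trace shows
    # dhgroups null, ffdhe2048, ffdhe3072 and ffdhe4096, with key ids 0-3.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done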
00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:13.114 12:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.370 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.628 00:19:13.628 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.628 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.628 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.884 { 
00:19:13.884 "cntlid": 65, 00:19:13.884 "qid": 0, 00:19:13.884 "state": "enabled", 00:19:13.884 "thread": "nvmf_tgt_poll_group_000", 00:19:13.884 "listen_address": { 00:19:13.884 "trtype": "TCP", 00:19:13.884 "adrfam": "IPv4", 00:19:13.884 "traddr": "10.0.0.2", 00:19:13.884 "trsvcid": "4420" 00:19:13.884 }, 00:19:13.884 "peer_address": { 00:19:13.884 "trtype": "TCP", 00:19:13.884 "adrfam": "IPv4", 00:19:13.884 "traddr": "10.0.0.1", 00:19:13.884 "trsvcid": "39826" 00:19:13.884 }, 00:19:13.884 "auth": { 00:19:13.884 "state": "completed", 00:19:13.884 "digest": "sha384", 00:19:13.884 "dhgroup": "ffdhe3072" 00:19:13.884 } 00:19:13.884 } 00:19:13.884 ]' 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.884 12:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.140 12:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:19:15.071 12:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.327 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.327 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.327 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.327 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.327 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.327 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:15.327 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.584 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.840 00:19:15.840 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.840 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.840 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.097 { 00:19:16.097 "cntlid": 67, 00:19:16.097 "qid": 0, 00:19:16.097 "state": "enabled", 00:19:16.097 "thread": "nvmf_tgt_poll_group_000", 00:19:16.097 "listen_address": { 00:19:16.097 "trtype": "TCP", 00:19:16.097 "adrfam": "IPv4", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "trsvcid": "4420" 00:19:16.097 }, 00:19:16.097 "peer_address": { 00:19:16.097 "trtype": "TCP", 00:19:16.097 "adrfam": "IPv4", 00:19:16.097 "traddr": "10.0.0.1", 00:19:16.097 "trsvcid": "39856" 00:19:16.097 }, 00:19:16.097 "auth": { 00:19:16.097 "state": "completed", 00:19:16.097 "digest": "sha384", 00:19:16.097 "dhgroup": "ffdhe3072" 00:19:16.097 } 00:19:16.097 } 00:19:16.097 ]' 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.097 12:14:23 
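Every verification block in this section reduces to the same three assertions against the target's view of the freshly authenticated queue pair. Condensed (rpc_cmd is the target-side wrapper used throughout; the expected values are the ones for this point of the sweep):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth state of "completed" means the DH-HMAC-CHAP exchange finished successfully for that qpair; any other value fails the [[ ]] test.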
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.097 12:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.354 12:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:19:17.285 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.285 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.286 12:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.286 12:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.286 12:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.286 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.286 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.286 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.542 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.105 00:19:18.105 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.105 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.106 12:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.362 { 00:19:18.362 "cntlid": 69, 00:19:18.362 "qid": 0, 00:19:18.362 "state": "enabled", 00:19:18.362 "thread": "nvmf_tgt_poll_group_000", 00:19:18.362 "listen_address": { 00:19:18.362 "trtype": "TCP", 00:19:18.362 "adrfam": "IPv4", 00:19:18.362 "traddr": "10.0.0.2", 00:19:18.362 "trsvcid": "4420" 00:19:18.362 }, 00:19:18.362 "peer_address": { 00:19:18.362 "trtype": "TCP", 00:19:18.362 "adrfam": "IPv4", 00:19:18.362 "traddr": "10.0.0.1", 00:19:18.362 "trsvcid": "39894" 00:19:18.362 }, 00:19:18.362 "auth": { 00:19:18.362 "state": "completed", 00:19:18.362 "digest": "sha384", 00:19:18.362 "dhgroup": "ffdhe3072" 00:19:18.362 } 00:19:18.362 } 00:19:18.362 ]' 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.362 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.363 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.363 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.363 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.363 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.620 12:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret 
DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.552 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.810 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.068 00:19:20.068 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.068 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.068 12:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.325 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.325 12:14:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.325 12:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.325 12:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.325 12:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.325 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.325 { 00:19:20.325 "cntlid": 71, 00:19:20.325 "qid": 0, 00:19:20.325 "state": "enabled", 00:19:20.325 "thread": "nvmf_tgt_poll_group_000", 00:19:20.326 "listen_address": { 00:19:20.326 "trtype": "TCP", 00:19:20.326 "adrfam": "IPv4", 00:19:20.326 "traddr": "10.0.0.2", 00:19:20.326 "trsvcid": "4420" 00:19:20.326 }, 00:19:20.326 "peer_address": { 00:19:20.326 "trtype": "TCP", 00:19:20.326 "adrfam": "IPv4", 00:19:20.326 "traddr": "10.0.0.1", 00:19:20.326 "trsvcid": "60270" 00:19:20.326 }, 00:19:20.326 "auth": { 00:19:20.326 "state": "completed", 00:19:20.326 "digest": "sha384", 00:19:20.326 "dhgroup": "ffdhe3072" 00:19:20.326 } 00:19:20.326 } 00:19:20.326 ]' 00:19:20.326 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.583 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.583 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.583 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.583 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.583 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.583 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.583 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.840 12:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.771 12:14:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.029 12:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.286 00:19:22.286 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.286 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.286 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.543 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.543 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.543 12:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.543 12:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.543 12:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.543 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.543 { 00:19:22.543 "cntlid": 73, 00:19:22.543 "qid": 0, 00:19:22.543 "state": "enabled", 00:19:22.543 "thread": "nvmf_tgt_poll_group_000", 00:19:22.543 "listen_address": { 00:19:22.543 "trtype": "TCP", 00:19:22.543 "adrfam": "IPv4", 00:19:22.543 "traddr": "10.0.0.2", 00:19:22.543 "trsvcid": "4420" 00:19:22.543 }, 00:19:22.543 "peer_address": { 00:19:22.543 "trtype": "TCP", 00:19:22.543 "adrfam": "IPv4", 00:19:22.543 "traddr": "10.0.0.1", 00:19:22.543 "trsvcid": "60302" 00:19:22.543 }, 00:19:22.543 "auth": { 00:19:22.543 
"state": "completed", 00:19:22.543 "digest": "sha384", 00:19:22.543 "dhgroup": "ffdhe4096" 00:19:22.543 } 00:19:22.543 } 00:19:22.543 ]' 00:19:22.543 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.800 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.800 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.800 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.800 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.800 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.800 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.800 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.115 12:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.045 12:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.302 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.559 00:19:24.559 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.559 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.559 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.816 { 00:19:24.816 "cntlid": 75, 00:19:24.816 "qid": 0, 00:19:24.816 "state": "enabled", 00:19:24.816 "thread": "nvmf_tgt_poll_group_000", 00:19:24.816 "listen_address": { 00:19:24.816 "trtype": "TCP", 00:19:24.816 "adrfam": "IPv4", 00:19:24.816 "traddr": "10.0.0.2", 00:19:24.816 "trsvcid": "4420" 00:19:24.816 }, 00:19:24.816 "peer_address": { 00:19:24.816 "trtype": "TCP", 00:19:24.816 "adrfam": "IPv4", 00:19:24.816 "traddr": "10.0.0.1", 00:19:24.816 "trsvcid": "60338" 00:19:24.816 }, 00:19:24.816 "auth": { 00:19:24.816 "state": "completed", 00:19:24.816 "digest": "sha384", 00:19:24.816 "dhgroup": "ffdhe4096" 00:19:24.816 } 00:19:24.816 } 00:19:24.816 ]' 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.816 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.073 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.073 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.073 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.073 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.073 12:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.329 12:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.261 12:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.536 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:26.830 00:19:26.830 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.830 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.830 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.087 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.087 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.087 12:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.087 12:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.087 12:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.087 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.087 { 00:19:27.087 "cntlid": 77, 00:19:27.087 "qid": 0, 00:19:27.087 "state": "enabled", 00:19:27.087 "thread": "nvmf_tgt_poll_group_000", 00:19:27.087 "listen_address": { 00:19:27.087 "trtype": "TCP", 00:19:27.087 "adrfam": "IPv4", 00:19:27.087 "traddr": "10.0.0.2", 00:19:27.087 "trsvcid": "4420" 00:19:27.087 }, 00:19:27.088 "peer_address": { 00:19:27.088 "trtype": "TCP", 00:19:27.088 "adrfam": "IPv4", 00:19:27.088 "traddr": "10.0.0.1", 00:19:27.088 "trsvcid": "60374" 00:19:27.088 }, 00:19:27.088 "auth": { 00:19:27.088 "state": "completed", 00:19:27.088 "digest": "sha384", 00:19:27.088 "dhgroup": "ffdhe4096" 00:19:27.088 } 00:19:27.088 } 00:19:27.088 ]' 00:19:27.088 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.088 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.088 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.088 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.088 12:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.088 12:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.088 12:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.088 12:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.346 12:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.719 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.975 00:19:29.231 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.231 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.231 12:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.231 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.231 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.231 12:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.231 12:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.488 { 00:19:29.488 "cntlid": 79, 00:19:29.488 "qid": 
0, 00:19:29.488 "state": "enabled", 00:19:29.488 "thread": "nvmf_tgt_poll_group_000", 00:19:29.488 "listen_address": { 00:19:29.488 "trtype": "TCP", 00:19:29.488 "adrfam": "IPv4", 00:19:29.488 "traddr": "10.0.0.2", 00:19:29.488 "trsvcid": "4420" 00:19:29.488 }, 00:19:29.488 "peer_address": { 00:19:29.488 "trtype": "TCP", 00:19:29.488 "adrfam": "IPv4", 00:19:29.488 "traddr": "10.0.0.1", 00:19:29.488 "trsvcid": "45314" 00:19:29.488 }, 00:19:29.488 "auth": { 00:19:29.488 "state": "completed", 00:19:29.488 "digest": "sha384", 00:19:29.488 "dhgroup": "ffdhe4096" 00:19:29.488 } 00:19:29.488 } 00:19:29.488 ]' 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.488 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.744 12:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.675 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.933 12:14:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.933 12:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.497 00:19:31.497 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.497 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.497 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.754 { 00:19:31.754 "cntlid": 81, 00:19:31.754 "qid": 0, 00:19:31.754 "state": "enabled", 00:19:31.754 "thread": "nvmf_tgt_poll_group_000", 00:19:31.754 "listen_address": { 00:19:31.754 "trtype": "TCP", 00:19:31.754 "adrfam": "IPv4", 00:19:31.754 "traddr": "10.0.0.2", 00:19:31.754 "trsvcid": "4420" 00:19:31.754 }, 00:19:31.754 "peer_address": { 00:19:31.754 "trtype": "TCP", 00:19:31.754 "adrfam": "IPv4", 00:19:31.754 "traddr": "10.0.0.1", 00:19:31.754 "trsvcid": "45336" 00:19:31.754 }, 00:19:31.754 "auth": { 00:19:31.754 "state": "completed", 00:19:31.754 "digest": "sha384", 00:19:31.754 "dhgroup": "ffdhe6144" 00:19:31.754 } 00:19:31.754 } 00:19:31.754 ]' 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.754 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.319 12:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.251 12:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.510 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.076 00:19:34.076 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.076 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.076 12:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.334 { 00:19:34.334 "cntlid": 83, 00:19:34.334 "qid": 0, 00:19:34.334 "state": "enabled", 00:19:34.334 "thread": "nvmf_tgt_poll_group_000", 00:19:34.334 "listen_address": { 00:19:34.334 "trtype": "TCP", 00:19:34.334 "adrfam": "IPv4", 00:19:34.334 "traddr": "10.0.0.2", 00:19:34.334 "trsvcid": "4420" 00:19:34.334 }, 00:19:34.334 "peer_address": { 00:19:34.334 "trtype": "TCP", 00:19:34.334 "adrfam": "IPv4", 00:19:34.334 "traddr": "10.0.0.1", 00:19:34.334 "trsvcid": "45356" 00:19:34.334 }, 00:19:34.334 "auth": { 00:19:34.334 "state": "completed", 00:19:34.334 "digest": "sha384", 00:19:34.334 "dhgroup": "ffdhe6144" 00:19:34.334 } 00:19:34.334 } 00:19:34.334 ]' 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.334 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.591 12:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret 
DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.524 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.781 12:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.345 00:19:36.345 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.345 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.602 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.859 { 00:19:36.859 "cntlid": 85, 00:19:36.859 "qid": 0, 00:19:36.859 "state": "enabled", 00:19:36.859 "thread": "nvmf_tgt_poll_group_000", 00:19:36.859 "listen_address": { 00:19:36.859 "trtype": "TCP", 00:19:36.859 "adrfam": "IPv4", 00:19:36.859 "traddr": "10.0.0.2", 00:19:36.859 "trsvcid": "4420" 00:19:36.859 }, 00:19:36.859 "peer_address": { 00:19:36.859 "trtype": "TCP", 00:19:36.859 "adrfam": "IPv4", 00:19:36.859 "traddr": "10.0.0.1", 00:19:36.859 "trsvcid": "45384" 00:19:36.859 }, 00:19:36.859 "auth": { 00:19:36.859 "state": "completed", 00:19:36.859 "digest": "sha384", 00:19:36.859 "dhgroup": "ffdhe6144" 00:19:36.859 } 00:19:36.859 } 00:19:36.859 ]' 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.859 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.116 12:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
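The cycle repeating above is the same five-step host/target exchange for every digest, DH group, and key index: constrain the host initiator's DH-CHAP parameters, register the host NQN on the subsystem with the key under test, attach a controller (the DH-HMAC-CHAP handshake runs during the fabric CONNECT), verify the negotiated qpair, and tear down. Condensed into a standalone sketch, with the socket paths and NQNs copied from this run and the target assumed to be serving its default RPC socket with keys key0..key3 already loaded:

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock                       # host-side SPDK RPC server
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
digest=sha384 dhgroup=ffdhe6144 key=key3          # one iteration of the loop

# 1. Restrict the host initiator to a single digest/DH-group combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the subsystem with the DH-CHAP key under test
#    (target-side RPC on the default socket).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

# 3. Attach a controller; authentication happens as part of CONNECT.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
       -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"

# 4. Confirm the qpair negotiated what was configured.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# 5. Tear down so the next digest/dhgroup/key combination starts clean.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

For key indices 0-2 the script additionally passes --dhchap-ctrlr-key ckeyN to both add_host and attach_controller, enabling bidirectional authentication; key3 has no controller key, so only the host is authenticated in that iteration.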
00:19:38.048 12:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.305 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:38.305 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.305 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.305 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.305 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.305 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.305 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:38.306 12:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.306 12:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.306 12:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.306 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.306 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.868 00:19:38.868 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.868 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.868 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.125 { 00:19:39.125 "cntlid": 87, 00:19:39.125 "qid": 0, 00:19:39.125 "state": "enabled", 00:19:39.125 "thread": "nvmf_tgt_poll_group_000", 00:19:39.125 "listen_address": { 00:19:39.125 "trtype": "TCP", 00:19:39.125 "adrfam": "IPv4", 00:19:39.125 "traddr": "10.0.0.2", 00:19:39.125 "trsvcid": "4420" 00:19:39.125 }, 00:19:39.125 "peer_address": { 00:19:39.125 "trtype": "TCP", 00:19:39.125 "adrfam": "IPv4", 00:19:39.125 "traddr": "10.0.0.1", 00:19:39.125 "trsvcid": "45414" 00:19:39.125 }, 00:19:39.125 "auth": { 00:19:39.125 "state": "completed", 
00:19:39.125 "digest": "sha384", 00:19:39.125 "dhgroup": "ffdhe6144" 00:19:39.125 } 00:19:39.125 } 00:19:39.125 ]' 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.125 12:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.125 12:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.125 12:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.125 12:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.125 12:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.125 12:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.383 12:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.751 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.752 12:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.699 00:19:41.699 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.699 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.699 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.956 { 00:19:41.956 "cntlid": 89, 00:19:41.956 "qid": 0, 00:19:41.956 "state": "enabled", 00:19:41.956 "thread": "nvmf_tgt_poll_group_000", 00:19:41.956 "listen_address": { 00:19:41.956 "trtype": "TCP", 00:19:41.956 "adrfam": "IPv4", 00:19:41.956 "traddr": "10.0.0.2", 00:19:41.956 "trsvcid": "4420" 00:19:41.956 }, 00:19:41.956 "peer_address": { 00:19:41.956 "trtype": "TCP", 00:19:41.956 "adrfam": "IPv4", 00:19:41.956 "traddr": "10.0.0.1", 00:19:41.956 "trsvcid": "33524" 00:19:41.956 }, 00:19:41.956 "auth": { 00:19:41.956 "state": "completed", 00:19:41.956 "digest": "sha384", 00:19:41.956 "dhgroup": "ffdhe8192" 00:19:41.956 } 00:19:41.956 } 00:19:41.956 ]' 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.956 12:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.213 12:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:43.141 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:43.398 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:43.398 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.398 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.398 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.398 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.398 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.399 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.399 12:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 12:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 12:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.399 12:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
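Between attach and detach, each iteration reads the controller and qpair state back and asserts on it, then exercises the same handshake through the kernel host stack with nvme-cli, passing the raw DHHC-1 secrets inline instead of named keys. A minimal rendering of those checks, continuing the variables from the sketch above and matching this ffdhe8192/key1 iteration (secrets abbreviated here; they are printed in full in the log):

# Negotiated auth parameters must match what the iteration configured.
auth=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth')
[[ $(jq -r '.digest'  <<< "$auth") == sha384    ]]
[[ $(jq -r '.dhgroup' <<< "$auth") == ffdhe8192 ]]
[[ $(jq -r '.state'   <<< "$auth") == completed ]]

# Same handshake via the kernel initiator (nvme-cli with DH-CHAP support).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
     --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
     --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n "$subnqn"

The "completed" auth state on the target-side qpair is the actual pass criterion; the digest and dhgroup fields confirm the target honored the parameters the host was restricted to in step 1.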
00:19:44.328 00:19:44.328 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.328 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.328 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.585 { 00:19:44.585 "cntlid": 91, 00:19:44.585 "qid": 0, 00:19:44.585 "state": "enabled", 00:19:44.585 "thread": "nvmf_tgt_poll_group_000", 00:19:44.585 "listen_address": { 00:19:44.585 "trtype": "TCP", 00:19:44.585 "adrfam": "IPv4", 00:19:44.585 "traddr": "10.0.0.2", 00:19:44.585 "trsvcid": "4420" 00:19:44.585 }, 00:19:44.585 "peer_address": { 00:19:44.585 "trtype": "TCP", 00:19:44.585 "adrfam": "IPv4", 00:19:44.585 "traddr": "10.0.0.1", 00:19:44.585 "trsvcid": "33564" 00:19:44.585 }, 00:19:44.585 "auth": { 00:19:44.585 "state": "completed", 00:19:44.585 "digest": "sha384", 00:19:44.585 "dhgroup": "ffdhe8192" 00:19:44.585 } 00:19:44.585 } 00:19:44.585 ]' 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.585 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.842 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.842 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.842 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.100 12:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.029 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.295 12:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.295 12:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.295 12:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.295 12:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.225 00:19:47.225 12:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.225 12:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.225 12:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.225 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.225 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.225 12:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.225 12:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.225 12:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.225 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.225 { 
00:19:47.225 "cntlid": 93, 00:19:47.225 "qid": 0, 00:19:47.225 "state": "enabled", 00:19:47.225 "thread": "nvmf_tgt_poll_group_000", 00:19:47.225 "listen_address": { 00:19:47.225 "trtype": "TCP", 00:19:47.225 "adrfam": "IPv4", 00:19:47.225 "traddr": "10.0.0.2", 00:19:47.225 "trsvcid": "4420" 00:19:47.225 }, 00:19:47.225 "peer_address": { 00:19:47.225 "trtype": "TCP", 00:19:47.225 "adrfam": "IPv4", 00:19:47.225 "traddr": "10.0.0.1", 00:19:47.225 "trsvcid": "33574" 00:19:47.225 }, 00:19:47.225 "auth": { 00:19:47.226 "state": "completed", 00:19:47.226 "digest": "sha384", 00:19:47.226 "dhgroup": "ffdhe8192" 00:19:47.226 } 00:19:47.226 } 00:19:47.226 ]' 00:19:47.226 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.482 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.482 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.482 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.482 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.482 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.482 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.482 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.739 12:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.669 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.927 12:14:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.927 12:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.858 00:19:49.858 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.858 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.858 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.115 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.115 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.115 12:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.115 12:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.115 12:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.115 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.115 { 00:19:50.115 "cntlid": 95, 00:19:50.115 "qid": 0, 00:19:50.115 "state": "enabled", 00:19:50.115 "thread": "nvmf_tgt_poll_group_000", 00:19:50.115 "listen_address": { 00:19:50.115 "trtype": "TCP", 00:19:50.115 "adrfam": "IPv4", 00:19:50.115 "traddr": "10.0.0.2", 00:19:50.115 "trsvcid": "4420" 00:19:50.115 }, 00:19:50.115 "peer_address": { 00:19:50.116 "trtype": "TCP", 00:19:50.116 "adrfam": "IPv4", 00:19:50.116 "traddr": "10.0.0.1", 00:19:50.116 "trsvcid": "34244" 00:19:50.116 }, 00:19:50.116 "auth": { 00:19:50.116 "state": "completed", 00:19:50.116 "digest": "sha384", 00:19:50.116 "dhgroup": "ffdhe8192" 00:19:50.116 } 00:19:50.116 } 00:19:50.116 ]' 00:19:50.116 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.116 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.116 12:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.116 12:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.116 12:14:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.373 12:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.373 12:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.373 12:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.631 12:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.561 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.818 12:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.819 12:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.819 12:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.819 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.819 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.076 00:19:52.076 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.076 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.076 12:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.334 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.334 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.334 12:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.334 12:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.334 12:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.334 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.334 { 00:19:52.334 "cntlid": 97, 00:19:52.334 "qid": 0, 00:19:52.334 "state": "enabled", 00:19:52.334 "thread": "nvmf_tgt_poll_group_000", 00:19:52.334 "listen_address": { 00:19:52.334 "trtype": "TCP", 00:19:52.334 "adrfam": "IPv4", 00:19:52.334 "traddr": "10.0.0.2", 00:19:52.334 "trsvcid": "4420" 00:19:52.334 }, 00:19:52.334 "peer_address": { 00:19:52.334 "trtype": "TCP", 00:19:52.334 "adrfam": "IPv4", 00:19:52.334 "traddr": "10.0.0.1", 00:19:52.334 "trsvcid": "34254" 00:19:52.334 }, 00:19:52.334 "auth": { 00:19:52.334 "state": "completed", 00:19:52.334 "digest": "sha512", 00:19:52.334 "dhgroup": "null" 00:19:52.334 } 00:19:52.334 } 00:19:52.334 ]' 00:19:52.334 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.597 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.597 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.597 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:52.597 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.597 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.597 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.597 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.853 12:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret 
DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.779 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.035 12:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.292 00:19:54.292 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.292 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.292 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.549 { 00:19:54.549 "cntlid": 99, 00:19:54.549 "qid": 0, 00:19:54.549 "state": "enabled", 00:19:54.549 "thread": "nvmf_tgt_poll_group_000", 00:19:54.549 "listen_address": { 00:19:54.549 "trtype": "TCP", 00:19:54.549 "adrfam": "IPv4", 00:19:54.549 "traddr": "10.0.0.2", 00:19:54.549 "trsvcid": "4420" 00:19:54.549 }, 00:19:54.549 "peer_address": { 00:19:54.549 "trtype": "TCP", 00:19:54.549 "adrfam": "IPv4", 00:19:54.549 "traddr": "10.0.0.1", 00:19:54.549 "trsvcid": "34280" 00:19:54.549 }, 00:19:54.549 "auth": { 00:19:54.549 "state": "completed", 00:19:54.549 "digest": "sha512", 00:19:54.549 "dhgroup": "null" 00:19:54.549 } 00:19:54.549 } 00:19:54.549 ]' 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.549 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.817 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:54.817 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.817 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.817 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.818 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.093 12:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:19:56.024 12:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.024 12:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.024 12:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.024 12:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.024 12:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.024 12:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.024 12:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.024 12:15:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.282 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.539 00:19:56.539 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.539 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.539 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.797 { 00:19:56.797 "cntlid": 101, 00:19:56.797 "qid": 0, 00:19:56.797 "state": "enabled", 00:19:56.797 "thread": "nvmf_tgt_poll_group_000", 00:19:56.797 "listen_address": { 00:19:56.797 "trtype": "TCP", 00:19:56.797 "adrfam": "IPv4", 00:19:56.797 "traddr": "10.0.0.2", 00:19:56.797 "trsvcid": "4420" 00:19:56.797 }, 00:19:56.797 "peer_address": { 00:19:56.797 "trtype": "TCP", 00:19:56.797 "adrfam": "IPv4", 00:19:56.797 "traddr": "10.0.0.1", 00:19:56.797 "trsvcid": "34310" 00:19:56.797 }, 00:19:56.797 "auth": 
{ 00:19:56.797 "state": "completed", 00:19:56.797 "digest": "sha512", 00:19:56.797 "dhgroup": "null" 00:19:56.797 } 00:19:56.797 } 00:19:56.797 ]' 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.797 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.055 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:57.055 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.055 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.055 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.055 12:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.313 12:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:19:58.245 12:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.246 12:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.246 12:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.246 12:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.246 12:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.246 12:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.246 12:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:58.246 12:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.503 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.761 00:19:58.761 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.761 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.761 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.019 { 00:19:59.019 "cntlid": 103, 00:19:59.019 "qid": 0, 00:19:59.019 "state": "enabled", 00:19:59.019 "thread": "nvmf_tgt_poll_group_000", 00:19:59.019 "listen_address": { 00:19:59.019 "trtype": "TCP", 00:19:59.019 "adrfam": "IPv4", 00:19:59.019 "traddr": "10.0.0.2", 00:19:59.019 "trsvcid": "4420" 00:19:59.019 }, 00:19:59.019 "peer_address": { 00:19:59.019 "trtype": "TCP", 00:19:59.019 "adrfam": "IPv4", 00:19:59.019 "traddr": "10.0.0.1", 00:19:59.019 "trsvcid": "34342" 00:19:59.019 }, 00:19:59.019 "auth": { 00:19:59.019 "state": "completed", 00:19:59.019 "digest": "sha512", 00:19:59.019 "dhgroup": "null" 00:19:59.019 } 00:19:59.019 } 00:19:59.019 ]' 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.019 12:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.277 12:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:00.211 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.469 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.035 00:20:01.035 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.035 12:15:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.035 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.293 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.293 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.293 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.293 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.293 12:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.293 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.293 { 00:20:01.293 "cntlid": 105, 00:20:01.293 "qid": 0, 00:20:01.293 "state": "enabled", 00:20:01.293 "thread": "nvmf_tgt_poll_group_000", 00:20:01.293 "listen_address": { 00:20:01.293 "trtype": "TCP", 00:20:01.293 "adrfam": "IPv4", 00:20:01.293 "traddr": "10.0.0.2", 00:20:01.293 "trsvcid": "4420" 00:20:01.293 }, 00:20:01.293 "peer_address": { 00:20:01.293 "trtype": "TCP", 00:20:01.293 "adrfam": "IPv4", 00:20:01.293 "traddr": "10.0.0.1", 00:20:01.293 "trsvcid": "55866" 00:20:01.293 }, 00:20:01.293 "auth": { 00:20:01.293 "state": "completed", 00:20:01.293 "digest": "sha512", 00:20:01.293 "dhgroup": "ffdhe2048" 00:20:01.293 } 00:20:01.293 } 00:20:01.293 ]' 00:20:01.293 12:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.293 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.293 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.293 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.293 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.293 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.293 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.293 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.551 12:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:20:02.481 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.482 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.482 12:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.482 12:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
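The block that just completed is one iteration of the test's authentication loop: for each digest, DH group, and key index, the host stack is restricted to exactly that digest/dhgroup pair, the host NQN is registered on the subsystem with the key under test, an SPDK controller is attached (which is where the DH-HMAC-CHAP exchange actually runs), the resulting qpair is checked with jq, the controller is detached, and a kernel nvme-cli connect/disconnect pass repeats the check with the raw DHHC-1 secrets. A minimal sketch of one iteration follows; the RPC names and flags are taken verbatim from the trace above, while the ./spdk path, the <host-uuid> placeholder, the key names, and the sha512/ffdhe2048 values are illustrative assumptions, not the test's fixed configuration:

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate iteration (paths, UUID, and key names are placeholders).
    RPC=./spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:<host-uuid>

    # Restrict the host-side NVMe driver to the digest/dhgroup pair under test.
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Allow the host on the target subsystem with this key (the ctrlr key enables bidirectional auth).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller; DH-HMAC-CHAP negotiation happens during this connect.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify on the target side that authentication completed with the expected parameters.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth | .state, .digest, .dhgroup'
    # expected: completed / sha512 / ffdhe2048

    # Tear down before the kernel-initiator pass.
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

The kernel leg visible in the trace reuses the same material as literal secrets (nvme connect ... --dhchap-secret DHHC-1:NN:... --dhchap-ctrl-secret DHHC-1:NN:...), followed by nvme disconnect and nvmf_subsystem_remove_host; the NN field in a DHHC-1 secret identifies the key's hash transform (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the secrets above carry prefixes matching their key roles.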
00:20:02.482 12:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.482 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.482 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.482 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.738 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.994 00:20:02.994 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.994 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.994 12:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.251 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.251 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.251 12:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.251 12:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.251 12:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.251 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.251 { 00:20:03.251 "cntlid": 107, 00:20:03.251 "qid": 0, 00:20:03.251 "state": "enabled", 00:20:03.251 "thread": 
"nvmf_tgt_poll_group_000", 00:20:03.251 "listen_address": { 00:20:03.251 "trtype": "TCP", 00:20:03.251 "adrfam": "IPv4", 00:20:03.251 "traddr": "10.0.0.2", 00:20:03.251 "trsvcid": "4420" 00:20:03.251 }, 00:20:03.251 "peer_address": { 00:20:03.251 "trtype": "TCP", 00:20:03.251 "adrfam": "IPv4", 00:20:03.251 "traddr": "10.0.0.1", 00:20:03.251 "trsvcid": "55902" 00:20:03.251 }, 00:20:03.251 "auth": { 00:20:03.251 "state": "completed", 00:20:03.251 "digest": "sha512", 00:20:03.251 "dhgroup": "ffdhe2048" 00:20:03.251 } 00:20:03.251 } 00:20:03.251 ]' 00:20:03.251 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.508 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.508 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.508 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.508 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.508 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.508 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.508 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.765 12:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:04.695 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.951 12:15:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.951 12:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.216 00:20:05.216 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.216 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.216 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.479 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.479 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.479 12:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.479 12:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.479 12:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.479 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.479 { 00:20:05.479 "cntlid": 109, 00:20:05.479 "qid": 0, 00:20:05.479 "state": "enabled", 00:20:05.479 "thread": "nvmf_tgt_poll_group_000", 00:20:05.479 "listen_address": { 00:20:05.479 "trtype": "TCP", 00:20:05.479 "adrfam": "IPv4", 00:20:05.479 "traddr": "10.0.0.2", 00:20:05.479 "trsvcid": "4420" 00:20:05.479 }, 00:20:05.480 "peer_address": { 00:20:05.480 "trtype": "TCP", 00:20:05.480 "adrfam": "IPv4", 00:20:05.480 "traddr": "10.0.0.1", 00:20:05.480 "trsvcid": "55936" 00:20:05.480 }, 00:20:05.480 "auth": { 00:20:05.480 "state": "completed", 00:20:05.480 "digest": "sha512", 00:20:05.480 "dhgroup": "ffdhe2048" 00:20:05.480 } 00:20:05.480 } 00:20:05.480 ]' 00:20:05.480 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.480 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.480 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.737 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.737 12:15:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.737 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.737 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.737 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.993 12:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:06.922 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.179 12:15:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.436 00:20:07.436 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.436 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.436 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.693 { 00:20:07.693 "cntlid": 111, 00:20:07.693 "qid": 0, 00:20:07.693 "state": "enabled", 00:20:07.693 "thread": "nvmf_tgt_poll_group_000", 00:20:07.693 "listen_address": { 00:20:07.693 "trtype": "TCP", 00:20:07.693 "adrfam": "IPv4", 00:20:07.693 "traddr": "10.0.0.2", 00:20:07.693 "trsvcid": "4420" 00:20:07.693 }, 00:20:07.693 "peer_address": { 00:20:07.693 "trtype": "TCP", 00:20:07.693 "adrfam": "IPv4", 00:20:07.693 "traddr": "10.0.0.1", 00:20:07.693 "trsvcid": "55946" 00:20:07.693 }, 00:20:07.693 "auth": { 00:20:07.693 "state": "completed", 00:20:07.693 "digest": "sha512", 00:20:07.693 "dhgroup": "ffdhe2048" 00:20:07.693 } 00:20:07.693 } 00:20:07.693 ]' 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.693 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.950 12:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:08.917 12:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.174 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.738 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.738 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.738 { 00:20:09.738 "cntlid": 113, 00:20:09.738 "qid": 0, 00:20:09.738 "state": "enabled", 00:20:09.738 "thread": "nvmf_tgt_poll_group_000", 00:20:09.738 "listen_address": { 00:20:09.738 "trtype": "TCP", 00:20:09.738 "adrfam": "IPv4", 00:20:09.738 "traddr": "10.0.0.2", 00:20:09.738 "trsvcid": "4420" 00:20:09.738 }, 00:20:09.738 "peer_address": { 00:20:09.738 "trtype": "TCP", 00:20:09.738 "adrfam": "IPv4", 00:20:09.738 "traddr": "10.0.0.1", 00:20:09.738 "trsvcid": "48688" 00:20:09.738 }, 00:20:09.738 "auth": { 00:20:09.738 "state": "completed", 00:20:09.738 "digest": "sha512", 00:20:09.738 "dhgroup": "ffdhe3072" 00:20:09.738 } 00:20:09.738 } 00:20:09.738 ]' 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.995 12:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.253 12:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:20:11.184 12:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.184 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.184 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.184 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.184 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.184 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.184 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.184 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.441 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:11.441 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.441 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.441 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.454 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.711 00:20:11.711 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.711 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.711 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.968 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.968 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.968 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.968 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.968 12:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.968 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.968 { 00:20:11.968 "cntlid": 115, 00:20:11.968 "qid": 0, 00:20:11.968 "state": "enabled", 00:20:11.968 "thread": "nvmf_tgt_poll_group_000", 00:20:11.968 "listen_address": { 00:20:11.968 "trtype": "TCP", 00:20:11.968 "adrfam": "IPv4", 00:20:11.968 "traddr": "10.0.0.2", 00:20:11.968 "trsvcid": "4420" 00:20:11.968 }, 00:20:11.968 "peer_address": { 00:20:11.968 "trtype": "TCP", 00:20:11.968 "adrfam": "IPv4", 00:20:11.968 "traddr": "10.0.0.1", 00:20:11.968 "trsvcid": "48706" 00:20:11.968 }, 00:20:11.968 "auth": { 00:20:11.968 "state": "completed", 00:20:11.968 "digest": "sha512", 00:20:11.968 "dhgroup": "ffdhe3072" 00:20:11.968 } 00:20:11.968 } 
00:20:11.968 ]' 00:20:11.968 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.225 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.225 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.225 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.225 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.225 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.225 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.225 12:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.482 12:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.412 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.668 12:15:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.668 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.925 00:20:13.925 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.925 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.925 12:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.183 { 00:20:14.183 "cntlid": 117, 00:20:14.183 "qid": 0, 00:20:14.183 "state": "enabled", 00:20:14.183 "thread": "nvmf_tgt_poll_group_000", 00:20:14.183 "listen_address": { 00:20:14.183 "trtype": "TCP", 00:20:14.183 "adrfam": "IPv4", 00:20:14.183 "traddr": "10.0.0.2", 00:20:14.183 "trsvcid": "4420" 00:20:14.183 }, 00:20:14.183 "peer_address": { 00:20:14.183 "trtype": "TCP", 00:20:14.183 "adrfam": "IPv4", 00:20:14.183 "traddr": "10.0.0.1", 00:20:14.183 "trsvcid": "48730" 00:20:14.183 }, 00:20:14.183 "auth": { 00:20:14.183 "state": "completed", 00:20:14.183 "digest": "sha512", 00:20:14.183 "dhgroup": "ffdhe3072" 00:20:14.183 } 00:20:14.183 } 00:20:14.183 ]' 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.183 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.440 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.440 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.440 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.440 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.440 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.698 12:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.628 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.885 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.141 00:20:16.141 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.141 12:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.141 12:15:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.398 { 00:20:16.398 "cntlid": 119, 00:20:16.398 "qid": 0, 00:20:16.398 "state": "enabled", 00:20:16.398 "thread": "nvmf_tgt_poll_group_000", 00:20:16.398 "listen_address": { 00:20:16.398 "trtype": "TCP", 00:20:16.398 "adrfam": "IPv4", 00:20:16.398 "traddr": "10.0.0.2", 00:20:16.398 "trsvcid": "4420" 00:20:16.398 }, 00:20:16.398 "peer_address": { 00:20:16.398 "trtype": "TCP", 00:20:16.398 "adrfam": "IPv4", 00:20:16.398 "traddr": "10.0.0.1", 00:20:16.398 "trsvcid": "48770" 00:20:16.398 }, 00:20:16.398 "auth": { 00:20:16.398 "state": "completed", 00:20:16.398 "digest": "sha512", 00:20:16.398 "dhgroup": "ffdhe3072" 00:20:16.398 } 00:20:16.398 } 00:20:16.398 ]' 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.398 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.656 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.656 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.656 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.656 12:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:20:17.584 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.584 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.584 12:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.584 12:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.840 12:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.840 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.840 12:15:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.840 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.840 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.097 12:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.354 00:20:18.354 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.354 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.354 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.611 { 00:20:18.611 "cntlid": 121, 00:20:18.611 "qid": 0, 00:20:18.611 "state": "enabled", 00:20:18.611 "thread": "nvmf_tgt_poll_group_000", 00:20:18.611 "listen_address": { 00:20:18.611 "trtype": "TCP", 00:20:18.611 "adrfam": "IPv4", 
00:20:18.611 "traddr": "10.0.0.2", 00:20:18.611 "trsvcid": "4420" 00:20:18.611 }, 00:20:18.611 "peer_address": { 00:20:18.611 "trtype": "TCP", 00:20:18.611 "adrfam": "IPv4", 00:20:18.611 "traddr": "10.0.0.1", 00:20:18.611 "trsvcid": "48804" 00:20:18.611 }, 00:20:18.611 "auth": { 00:20:18.611 "state": "completed", 00:20:18.611 "digest": "sha512", 00:20:18.611 "dhgroup": "ffdhe4096" 00:20:18.611 } 00:20:18.611 } 00:20:18.611 ]' 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.611 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.867 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.867 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.867 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.867 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.867 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.134 12:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.061 12:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.317 12:15:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.317 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.573 00:20:20.573 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.573 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.573 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.830 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.830 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.830 12:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.830 12:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.830 12:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.830 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.830 { 00:20:20.830 "cntlid": 123, 00:20:20.830 "qid": 0, 00:20:20.830 "state": "enabled", 00:20:20.830 "thread": "nvmf_tgt_poll_group_000", 00:20:20.830 "listen_address": { 00:20:20.830 "trtype": "TCP", 00:20:20.830 "adrfam": "IPv4", 00:20:20.830 "traddr": "10.0.0.2", 00:20:20.830 "trsvcid": "4420" 00:20:20.830 }, 00:20:20.830 "peer_address": { 00:20:20.830 "trtype": "TCP", 00:20:20.830 "adrfam": "IPv4", 00:20:20.830 "traddr": "10.0.0.1", 00:20:20.830 "trsvcid": "35244" 00:20:20.830 }, 00:20:20.830 "auth": { 00:20:20.830 "state": "completed", 00:20:20.830 "digest": "sha512", 00:20:20.830 "dhgroup": "ffdhe4096" 00:20:20.830 } 00:20:20.830 } 00:20:20.830 ]' 00:20:20.830 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.087 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.087 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.087 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.087 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.087 12:15:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.087 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.087 12:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.344 12:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.332 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.590 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.152 00:20:23.152 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.152 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.152 12:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.152 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.152 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.152 12:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.152 12:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.409 { 00:20:23.409 "cntlid": 125, 00:20:23.409 "qid": 0, 00:20:23.409 "state": "enabled", 00:20:23.409 "thread": "nvmf_tgt_poll_group_000", 00:20:23.409 "listen_address": { 00:20:23.409 "trtype": "TCP", 00:20:23.409 "adrfam": "IPv4", 00:20:23.409 "traddr": "10.0.0.2", 00:20:23.409 "trsvcid": "4420" 00:20:23.409 }, 00:20:23.409 "peer_address": { 00:20:23.409 "trtype": "TCP", 00:20:23.409 "adrfam": "IPv4", 00:20:23.409 "traddr": "10.0.0.1", 00:20:23.409 "trsvcid": "35258" 00:20:23.409 }, 00:20:23.409 "auth": { 00:20:23.409 "state": "completed", 00:20:23.409 "digest": "sha512", 00:20:23.409 "dhgroup": "ffdhe4096" 00:20:23.409 } 00:20:23.409 } 00:20:23.409 ]' 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.409 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.665 12:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:20:24.593 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
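The records above and below repeat one fixed verification cycle per (digest, dhgroup, key) combination driven by target/auth.sh: pin the host-side initiator to a single --dhchap-digests/--dhchap-dhgroups pair, register the host NQN on the subsystem with the key under test, attach a controller through the SPDK host RPC, assert via jq that the qpair reports auth.state=completed with the expected digest and dhgroup, detach, then repeat the handshake with the kernel initiator (nvme connect) using the same DHHC-1 secrets before tearing down. A minimal sketch of that cycle, reconstructed from the commands traced in this log (the shell variable names here are illustrative, not the script's own; the key2/ckey2 pair stands in for whichever key name a given iteration tests):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Pin the host-side initiator (host.sock) to the digest/dhgroup pair under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Register the host on the subsystem (target side, default RPC socket) with the
# key under test. For key3 the script has no controller key, so the
# --dhchap-ctrlr-key argument is omitted on that iteration.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach through the SPDK host RPC, then verify on the target that the qpair
# authenticated with the expected parameters before detaching again.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator using the DHHC-1 secret
# strings seen in the trace, then tear down for the next combination.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The qpairs JSON blocks interleaved with the trace are the nvmf_subsystem_get_qpairs output these jq assertions run against, and the DHHC-1:..:...: strings passed to nvme connect are the DH-HMAC-CHAP secrets in nvme-cli's standard textual form.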
00:20:24.594 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.594 12:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.594 12:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.594 12:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.594 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.594 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.594 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.850 12:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.413 00:20:25.413 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.413 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.413 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.669 { 00:20:25.669 "cntlid": 127, 00:20:25.669 "qid": 0, 00:20:25.669 "state": "enabled", 00:20:25.669 "thread": "nvmf_tgt_poll_group_000", 00:20:25.669 "listen_address": { 00:20:25.669 "trtype": "TCP", 00:20:25.669 "adrfam": "IPv4", 00:20:25.669 "traddr": "10.0.0.2", 00:20:25.669 "trsvcid": "4420" 00:20:25.669 }, 00:20:25.669 "peer_address": { 00:20:25.669 "trtype": "TCP", 00:20:25.669 "adrfam": "IPv4", 00:20:25.669 "traddr": "10.0.0.1", 00:20:25.669 "trsvcid": "35290" 00:20:25.669 }, 00:20:25.669 "auth": { 00:20:25.669 "state": "completed", 00:20:25.669 "digest": "sha512", 00:20:25.669 "dhgroup": "ffdhe4096" 00:20:25.669 } 00:20:25.669 } 00:20:25.669 ]' 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.669 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.926 12:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:26.857 12:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.115 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.679 00:20:27.679 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.679 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.679 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.937 { 00:20:27.937 "cntlid": 129, 00:20:27.937 "qid": 0, 00:20:27.937 "state": "enabled", 00:20:27.937 "thread": "nvmf_tgt_poll_group_000", 00:20:27.937 "listen_address": { 00:20:27.937 "trtype": "TCP", 00:20:27.937 "adrfam": "IPv4", 00:20:27.937 "traddr": "10.0.0.2", 00:20:27.937 "trsvcid": "4420" 00:20:27.937 }, 00:20:27.937 "peer_address": { 00:20:27.937 "trtype": "TCP", 00:20:27.937 "adrfam": "IPv4", 00:20:27.937 "traddr": "10.0.0.1", 00:20:27.937 "trsvcid": "35308" 00:20:27.937 }, 00:20:27.937 "auth": { 00:20:27.937 "state": "completed", 00:20:27.937 "digest": "sha512", 00:20:27.937 "dhgroup": "ffdhe6144" 00:20:27.937 } 00:20:27.937 } 00:20:27.937 ]' 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.937 12:15:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.937 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.193 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.193 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.193 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.193 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.193 12:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.448 12:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.378 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.634 12:15:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.634 12:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.197 00:20:30.197 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.197 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.197 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.454 { 00:20:30.454 "cntlid": 131, 00:20:30.454 "qid": 0, 00:20:30.454 "state": "enabled", 00:20:30.454 "thread": "nvmf_tgt_poll_group_000", 00:20:30.454 "listen_address": { 00:20:30.454 "trtype": "TCP", 00:20:30.454 "adrfam": "IPv4", 00:20:30.454 "traddr": "10.0.0.2", 00:20:30.454 "trsvcid": "4420" 00:20:30.454 }, 00:20:30.454 "peer_address": { 00:20:30.454 "trtype": "TCP", 00:20:30.454 "adrfam": "IPv4", 00:20:30.454 "traddr": "10.0.0.1", 00:20:30.454 "trsvcid": "46680" 00:20:30.454 }, 00:20:30.454 "auth": { 00:20:30.454 "state": "completed", 00:20:30.454 "digest": "sha512", 00:20:30.454 "dhgroup": "ffdhe6144" 00:20:30.454 } 00:20:30.454 } 00:20:30.454 ]' 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.454 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.710 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.710 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.710 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.710 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.710 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.966 12:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.901 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:32.158 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:32.158 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.158 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.158 12:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.158 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.724 00:20:32.724 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.724 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.724 12:15:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.981 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.981 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.981 12:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.981 12:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.981 12:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.981 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.981 { 00:20:32.981 "cntlid": 133, 00:20:32.981 "qid": 0, 00:20:32.981 "state": "enabled", 00:20:32.981 "thread": "nvmf_tgt_poll_group_000", 00:20:32.981 "listen_address": { 00:20:32.981 "trtype": "TCP", 00:20:32.981 "adrfam": "IPv4", 00:20:32.981 "traddr": "10.0.0.2", 00:20:32.981 "trsvcid": "4420" 00:20:32.981 }, 00:20:32.981 "peer_address": { 00:20:32.981 "trtype": "TCP", 00:20:32.981 "adrfam": "IPv4", 00:20:32.981 "traddr": "10.0.0.1", 00:20:32.981 "trsvcid": "46716" 00:20:32.981 }, 00:20:32.981 "auth": { 00:20:32.981 "state": "completed", 00:20:32.981 "digest": "sha512", 00:20:32.981 "dhgroup": "ffdhe6144" 00:20:32.981 } 00:20:32.981 } 00:20:32.981 ]' 00:20:32.981 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.238 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.238 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.238 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.238 12:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.238 12:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.238 12:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.238 12:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.512 12:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:20:34.446 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.446 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.446 12:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.446 12:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.446 12:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.446 12:15:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.446 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:34.446 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.703 12:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.266 00:20:35.267 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.267 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.267 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.523 { 00:20:35.523 "cntlid": 135, 00:20:35.523 "qid": 0, 00:20:35.523 "state": "enabled", 00:20:35.523 "thread": "nvmf_tgt_poll_group_000", 00:20:35.523 "listen_address": { 00:20:35.523 "trtype": "TCP", 00:20:35.523 "adrfam": "IPv4", 00:20:35.523 "traddr": "10.0.0.2", 00:20:35.523 "trsvcid": "4420" 00:20:35.523 }, 
00:20:35.523 "peer_address": { 00:20:35.523 "trtype": "TCP", 00:20:35.523 "adrfam": "IPv4", 00:20:35.523 "traddr": "10.0.0.1", 00:20:35.523 "trsvcid": "46744" 00:20:35.523 }, 00:20:35.523 "auth": { 00:20:35.523 "state": "completed", 00:20:35.523 "digest": "sha512", 00:20:35.523 "dhgroup": "ffdhe6144" 00:20:35.523 } 00:20:35.523 } 00:20:35.523 ]' 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.523 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.781 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.781 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.781 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.057 12:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:37.024 12:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.281 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.217 00:20:38.217 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.217 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.217 12:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.474 { 00:20:38.474 "cntlid": 137, 00:20:38.474 "qid": 0, 00:20:38.474 "state": "enabled", 00:20:38.474 "thread": "nvmf_tgt_poll_group_000", 00:20:38.474 "listen_address": { 00:20:38.474 "trtype": "TCP", 00:20:38.474 "adrfam": "IPv4", 00:20:38.474 "traddr": "10.0.0.2", 00:20:38.474 "trsvcid": "4420" 00:20:38.474 }, 00:20:38.474 "peer_address": { 00:20:38.474 "trtype": "TCP", 00:20:38.474 "adrfam": "IPv4", 00:20:38.474 "traddr": "10.0.0.1", 00:20:38.474 "trsvcid": "46786" 00:20:38.474 }, 00:20:38.474 "auth": { 00:20:38.474 "state": "completed", 00:20:38.474 "digest": "sha512", 00:20:38.474 "dhgroup": "ffdhe8192" 00:20:38.474 } 00:20:38.474 } 00:20:38.474 ]' 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.474 12:15:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.474 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.733 12:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.108 12:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.046 00:20:41.046 12:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.046 12:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.046 12:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.303 { 00:20:41.303 "cntlid": 139, 00:20:41.303 "qid": 0, 00:20:41.303 "state": "enabled", 00:20:41.303 "thread": "nvmf_tgt_poll_group_000", 00:20:41.303 "listen_address": { 00:20:41.303 "trtype": "TCP", 00:20:41.303 "adrfam": "IPv4", 00:20:41.303 "traddr": "10.0.0.2", 00:20:41.303 "trsvcid": "4420" 00:20:41.303 }, 00:20:41.303 "peer_address": { 00:20:41.303 "trtype": "TCP", 00:20:41.303 "adrfam": "IPv4", 00:20:41.303 "traddr": "10.0.0.1", 00:20:41.303 "trsvcid": "53762" 00:20:41.303 }, 00:20:41.303 "auth": { 00:20:41.303 "state": "completed", 00:20:41.303 "digest": "sha512", 00:20:41.303 "dhgroup": "ffdhe8192" 00:20:41.303 } 00:20:41.303 } 00:20:41.303 ]' 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.303 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.561 12:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzYyZWFkNGVmMThjMGU1NzQ2ZTEyMWM4NTYyYjg4MzXoYLK1: --dhchap-ctrl-secret DHHC-1:02:ZjlkNzFkOGFmN2IzM2IxZWU1MmY5ZjU1YmIwNDM3ZmY3Nzc0MTk3NjIwZWQ3NDZm0KrDKg==: 00:20:42.495 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.752 12:15:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.752 12:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.752 12:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.752 12:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.752 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.752 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.752 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.009 12:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.947 00:20:43.947 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.947 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.947 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.225 { 00:20:44.225 "cntlid": 141, 00:20:44.225 "qid": 0, 00:20:44.225 "state": "enabled", 00:20:44.225 "thread": "nvmf_tgt_poll_group_000", 00:20:44.225 "listen_address": { 00:20:44.225 "trtype": "TCP", 00:20:44.225 "adrfam": "IPv4", 00:20:44.225 "traddr": "10.0.0.2", 00:20:44.225 "trsvcid": "4420" 00:20:44.225 }, 00:20:44.225 "peer_address": { 00:20:44.225 "trtype": "TCP", 00:20:44.225 "adrfam": "IPv4", 00:20:44.225 "traddr": "10.0.0.1", 00:20:44.225 "trsvcid": "53800" 00:20:44.225 }, 00:20:44.225 "auth": { 00:20:44.225 "state": "completed", 00:20:44.225 "digest": "sha512", 00:20:44.225 "dhgroup": "ffdhe8192" 00:20:44.225 } 00:20:44.225 } 00:20:44.225 ]' 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.225 12:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.225 12:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.225 12:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.225 12:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.482 12:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YWEwNjhlM2FlYzBjYzQ0NmM2M2I5MzY5NDJkYWM2YzAzMWM4MjA5MmM4ZTNiOWRjx2ijjg==: --dhchap-ctrl-secret DHHC-1:01:MTU2NjQwZDg5Mzc2MzJjZjJiNjU4OTQ3NGJiMjA5MzipwDSZ: 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.415 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:45.671 12:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.606 00:20:46.606 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.606 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.606 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.864 { 00:20:46.864 "cntlid": 143, 00:20:46.864 "qid": 0, 00:20:46.864 "state": "enabled", 00:20:46.864 "thread": "nvmf_tgt_poll_group_000", 00:20:46.864 "listen_address": { 00:20:46.864 "trtype": "TCP", 00:20:46.864 "adrfam": "IPv4", 00:20:46.864 "traddr": "10.0.0.2", 00:20:46.864 "trsvcid": "4420" 00:20:46.864 }, 00:20:46.864 "peer_address": { 00:20:46.864 "trtype": "TCP", 00:20:46.864 "adrfam": "IPv4", 00:20:46.864 "traddr": "10.0.0.1", 00:20:46.864 "trsvcid": "53828" 00:20:46.864 }, 00:20:46.864 "auth": { 00:20:46.864 "state": "completed", 00:20:46.864 "digest": "sha512", 00:20:46.864 "dhgroup": "ffdhe8192" 00:20:46.864 } 00:20:46.864 } 00:20:46.864 ]' 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.864 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.864 
12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.121 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.121 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.121 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.121 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.121 12:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.378 12:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:20:48.311 12:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:48.311 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.569 12:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.504 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.504 { 00:20:49.504 "cntlid": 145, 00:20:49.504 "qid": 0, 00:20:49.504 "state": "enabled", 00:20:49.504 "thread": "nvmf_tgt_poll_group_000", 00:20:49.504 "listen_address": { 00:20:49.504 "trtype": "TCP", 00:20:49.504 "adrfam": "IPv4", 00:20:49.504 "traddr": "10.0.0.2", 00:20:49.504 "trsvcid": "4420" 00:20:49.504 }, 00:20:49.504 "peer_address": { 00:20:49.504 "trtype": "TCP", 00:20:49.504 "adrfam": "IPv4", 00:20:49.504 "traddr": "10.0.0.1", 00:20:49.504 "trsvcid": "53872" 00:20:49.504 }, 00:20:49.504 "auth": { 00:20:49.504 "state": "completed", 00:20:49.504 "digest": "sha512", 00:20:49.504 "dhgroup": "ffdhe8192" 00:20:49.504 } 00:20:49.504 } 00:20:49.504 ]' 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.504 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.761 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.761 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.761 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.761 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.762 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.019 12:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWE0YmZlYzI4YTQ4ZjAyNTE1MmNlOGJkNjk3MzRlOWYyOGRiYWJkODQ4MDAzMTJhPmXTfA==: --dhchap-ctrl-secret DHHC-1:03:ZDM0Yjg1MDc4Y2M2NDhlYTI0MDU1NWE1NTRkNzBkODY5NTZjN2EwMTgzMjZlYjgwMDg5OTFhZTNhZWMyMzY4Y3ysxq0=: 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.992 12:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:51.931 request: 00:20:51.931 { 00:20:51.931 "name": "nvme0", 00:20:51.931 "trtype": "tcp", 00:20:51.931 "traddr": "10.0.0.2", 00:20:51.931 "adrfam": "ipv4", 00:20:51.931 "trsvcid": "4420", 00:20:51.931 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:51.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.931 "prchk_reftag": false, 00:20:51.931 "prchk_guard": false, 00:20:51.931 "hdgst": false, 00:20:51.931 "ddgst": false, 00:20:51.931 "dhchap_key": "key2", 00:20:51.931 "method": "bdev_nvme_attach_controller", 00:20:51.931 "req_id": 1 00:20:51.931 } 00:20:51.931 Got JSON-RPC error response 00:20:51.931 response: 00:20:51.931 { 00:20:51.931 "code": -5, 00:20:51.931 "message": "Input/output error" 00:20:51.931 } 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.931 12:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.866 request: 00:20:52.866 { 00:20:52.866 "name": "nvme0", 00:20:52.866 "trtype": "tcp", 00:20:52.866 "traddr": "10.0.0.2", 00:20:52.866 "adrfam": "ipv4", 00:20:52.866 "trsvcid": "4420", 00:20:52.866 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:52.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.866 "prchk_reftag": false, 00:20:52.866 "prchk_guard": false, 00:20:52.866 "hdgst": false, 00:20:52.866 "ddgst": false, 00:20:52.866 "dhchap_key": "key1", 00:20:52.866 "dhchap_ctrlr_key": "ckey2", 00:20:52.866 "method": "bdev_nvme_attach_controller", 00:20:52.866 "req_id": 1 00:20:52.866 } 00:20:52.866 Got JSON-RPC error response 00:20:52.866 response: 00:20:52.867 { 00:20:52.867 "code": -5, 00:20:52.867 "message": "Input/output error" 00:20:52.867 } 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.867 12:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.433 request: 00:20:53.433 { 00:20:53.433 "name": "nvme0", 00:20:53.433 "trtype": "tcp", 00:20:53.433 "traddr": "10.0.0.2", 00:20:53.433 "adrfam": "ipv4", 00:20:53.433 "trsvcid": "4420", 00:20:53.433 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:53.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.433 "prchk_reftag": false, 00:20:53.433 "prchk_guard": false, 00:20:53.433 "hdgst": false, 00:20:53.433 "ddgst": false, 00:20:53.433 "dhchap_key": "key1", 00:20:53.433 "dhchap_ctrlr_key": "ckey1", 00:20:53.433 "method": "bdev_nvme_attach_controller", 00:20:53.433 "req_id": 1 00:20:53.433 } 00:20:53.433 Got JSON-RPC error response 00:20:53.433 response: 00:20:53.433 { 00:20:53.433 "code": -5, 00:20:53.433 "message": "Input/output error" 00:20:53.433 } 00:20:53.433 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:53.433 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:53.433 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:53.433 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:53.433 12:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.433 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.433 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 997203 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 997203 ']' 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 997203 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 997203 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 997203' 00:20:53.691 killing process with pid 997203 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 997203 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 997203 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.691 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1019699 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1019699 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1019699 ']' 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.948 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1019699 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1019699 ']' 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
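For reference, each connect_authenticate iteration traced above reduces to four RPCs. A minimal sketch, assuming a target already listening on 10.0.0.2:4420 with subsystem nqn.2024-03.io.spdk:cnode0 and DH-CHAP keys key0..key3 (and ckey0..ckey3) registered earlier in auth.sh; target-side calls are shown against the default RPC socket, whereas auth.sh routes them through its rpc_cmd wrapper:

    # constrain the host to the digest/dhgroup combination under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # allow the host NQN on the subsystem with the matching key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach; this runs the bidirectional DH-HMAC-CHAP handshake
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the qpair negotiated the expected parameters
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'

When the two sides disagree (wrong key index or a controller key the target does not hold), the handshake fails and bdev_nvme_attach_controller returns the JSON-RPC "Input/output error" (code -5) responses seen in the negative tests above.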
00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.204 12:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.461 12:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.395 00:20:55.395 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.395 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.395 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.651 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.651 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.651 12:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.651 12:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.651 12:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.651 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.651 { 00:20:55.651 
"cntlid": 1, 00:20:55.651 "qid": 0, 00:20:55.651 "state": "enabled", 00:20:55.651 "thread": "nvmf_tgt_poll_group_000", 00:20:55.651 "listen_address": { 00:20:55.651 "trtype": "TCP", 00:20:55.651 "adrfam": "IPv4", 00:20:55.651 "traddr": "10.0.0.2", 00:20:55.651 "trsvcid": "4420" 00:20:55.651 }, 00:20:55.651 "peer_address": { 00:20:55.651 "trtype": "TCP", 00:20:55.651 "adrfam": "IPv4", 00:20:55.651 "traddr": "10.0.0.1", 00:20:55.651 "trsvcid": "54106" 00:20:55.651 }, 00:20:55.651 "auth": { 00:20:55.651 "state": "completed", 00:20:55.651 "digest": "sha512", 00:20:55.651 "dhgroup": "ffdhe8192" 00:20:55.651 } 00:20:55.651 } 00:20:55.651 ]' 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.652 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.908 12:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NjQ1OTdmMzA1NTk5MWQ4MjM2MTA5NTFlMjM5YjM4NmEyNTM3ZGJlMTkwYjZkZWM1NTQzNDUxY2Y3ZGQ2YTY1MY8d/uA=: 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:56.838 12:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.095 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.353 request: 00:20:57.353 { 00:20:57.353 "name": "nvme0", 00:20:57.353 "trtype": "tcp", 00:20:57.353 "traddr": "10.0.0.2", 00:20:57.353 "adrfam": "ipv4", 00:20:57.353 "trsvcid": "4420", 00:20:57.353 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:57.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.353 "prchk_reftag": false, 00:20:57.353 "prchk_guard": false, 00:20:57.353 "hdgst": false, 00:20:57.353 "ddgst": false, 00:20:57.353 "dhchap_key": "key3", 00:20:57.353 "method": "bdev_nvme_attach_controller", 00:20:57.353 "req_id": 1 00:20:57.353 } 00:20:57.353 Got JSON-RPC error response 00:20:57.353 response: 00:20:57.353 { 00:20:57.353 "code": -5, 00:20:57.353 "message": "Input/output error" 00:20:57.353 } 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:57.353 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.610 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.869 request: 00:20:57.869 { 00:20:57.869 "name": "nvme0", 00:20:57.869 "trtype": "tcp", 00:20:57.869 "traddr": "10.0.0.2", 00:20:57.869 "adrfam": "ipv4", 00:20:57.869 "trsvcid": "4420", 00:20:57.869 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:57.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.869 "prchk_reftag": false, 00:20:57.869 "prchk_guard": false, 00:20:57.869 "hdgst": false, 00:20:57.869 "ddgst": false, 00:20:57.869 "dhchap_key": "key3", 00:20:57.869 "method": "bdev_nvme_attach_controller", 00:20:57.869 "req_id": 1 00:20:57.869 } 00:20:57.869 Got JSON-RPC error response 00:20:57.869 response: 00:20:57.869 { 00:20:57.869 "code": -5, 00:20:57.869 "message": "Input/output error" 00:20:57.869 } 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:57.869 12:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.126 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.383 request: 00:20:58.383 { 00:20:58.383 "name": "nvme0", 00:20:58.383 "trtype": "tcp", 00:20:58.383 "traddr": "10.0.0.2", 00:20:58.383 "adrfam": "ipv4", 00:20:58.383 "trsvcid": "4420", 00:20:58.383 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.383 "prchk_reftag": false, 00:20:58.383 "prchk_guard": false, 00:20:58.383 "hdgst": false, 00:20:58.383 "ddgst": false, 00:20:58.383 
"dhchap_key": "key0", 00:20:58.383 "dhchap_ctrlr_key": "key1", 00:20:58.383 "method": "bdev_nvme_attach_controller", 00:20:58.383 "req_id": 1 00:20:58.383 } 00:20:58.383 Got JSON-RPC error response 00:20:58.383 response: 00:20:58.383 { 00:20:58.383 "code": -5, 00:20:58.383 "message": "Input/output error" 00:20:58.383 } 00:20:58.383 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:58.383 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.383 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.383 12:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.383 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:58.383 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:58.948 00:20:58.948 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:58.948 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:58.948 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.948 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.948 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.948 12:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 997222 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 997222 ']' 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 997222 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 997222 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 997222' 00:20:59.205 killing process with pid 997222 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 997222 00:20:59.205 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 997222 00:20:59.772 
12:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.772 rmmod nvme_tcp 00:20:59.772 rmmod nvme_fabrics 00:20:59.772 rmmod nvme_keyring 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1019699 ']' 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1019699 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1019699 ']' 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1019699 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019699 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019699' 00:20:59.772 killing process with pid 1019699 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1019699 00:20:59.772 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1019699 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.032 12:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.929 12:16:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:01.929 12:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ivo /tmp/spdk.key-sha256.iSh /tmp/spdk.key-sha384.H07 /tmp/spdk.key-sha512.ozR /tmp/spdk.key-sha512.VTX /tmp/spdk.key-sha384.Tr6 /tmp/spdk.key-sha256.n9j '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:02.185 00:21:02.185 real 3m8.471s 00:21:02.185 user 7m18.344s 00:21:02.185 sys 0m24.809s 00:21:02.185 12:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:02.185 12:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.185 ************************************ 00:21:02.185 END TEST nvmf_auth_target 00:21:02.185 ************************************ 00:21:02.185 12:16:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:02.185 12:16:09 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:02.185 12:16:09 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:02.185 12:16:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:02.185 12:16:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.185 12:16:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.185 ************************************ 00:21:02.186 START TEST nvmf_bdevio_no_huge 00:21:02.186 ************************************ 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:02.186 * Looking for test storage... 00:21:02.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
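The hostnqn/hostid pair set up above is what every later nvme connect and bdev_nvme_attach_controller call in this log reuses. A sketch of the derivation; the suffix-stripping step is an assumption about the helper, since the log only shows that both values carry the same UUID:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: host ID is the NQN's uuid suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")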
00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.186 12:16:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.186 12:16:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
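The NIC scan that unfolds over the next several entries reduces to a two-step pattern: register the supported vendor:device IDs per NIC family (0x159b is the Intel E810 ID the log ends up matching twice), then resolve each matched PCI address to its kernel net device through sysfs. A simplified sketch of the resolution loop, using the same array names the log shows:

    # For each matched PCI address, list its net devices via sysfs and strip
    # the paths down to interface names (e.g. cvl_0_0, cvl_0_1).
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        net_devs+=("${pci_net_devs[@]}")
    done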
00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:04.128 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:04.128 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:04.128 
12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:04.128 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:04.129 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:04.129 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.129 12:16:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.129 12:16:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.129 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.129 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.129 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:04.129 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.386 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.386 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.386 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:04.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:21:04.386 00:21:04.386 --- 10.0.0.2 ping statistics --- 00:21:04.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.386 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:21:04.387 00:21:04.387 --- 10.0.0.1 ping statistics --- 00:21:04.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.387 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1022341 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1022341 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1022341 ']' 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.387 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.387 [2024-07-22 12:16:12.160171] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:04.387 [2024-07-22 12:16:12.160260] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:04.387 [2024-07-22 12:16:12.208623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:04.387 [2024-07-22 12:16:12.230403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.645 [2024-07-22 12:16:12.320877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.645 [2024-07-22 12:16:12.320952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.645 [2024-07-22 12:16:12.320969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.645 [2024-07-22 12:16:12.320983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.645 [2024-07-22 12:16:12.320994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
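The reactor start-up notices that follow all come from this single launch. Written out, the hugepage-free bring-up recorded here is:

    # -s 1024 caps DPDK at 1024 MB of ordinary memory (--no-huge, so no
    # hugepages are reserved); -m 0x78 is a core mask for cores 3-6,
    # matching the four reactors reported below.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78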
00:21:04.645 [2024-07-22 12:16:12.321078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:04.645 [2024-07-22 12:16:12.321137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:04.645 [2024-07-22 12:16:12.321467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:04.645 [2024-07-22 12:16:12.321471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.645 [2024-07-22 12:16:12.436485] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.645 Malloc0 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:04.645 [2024-07-22 12:16:12.474270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.645 { 00:21:04.645 "params": { 00:21:04.645 "name": "Nvme$subsystem", 00:21:04.645 "trtype": "$TEST_TRANSPORT", 00:21:04.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.645 "adrfam": "ipv4", 00:21:04.645 "trsvcid": "$NVMF_PORT", 00:21:04.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.645 "hdgst": ${hdgst:-false}, 00:21:04.645 "ddgst": ${ddgst:-false} 00:21:04.645 }, 00:21:04.645 "method": "bdev_nvme_attach_controller" 00:21:04.645 } 00:21:04.645 EOF 00:21:04.645 )") 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:04.645 12:16:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:04.645 "params": { 00:21:04.645 "name": "Nvme1", 00:21:04.645 "trtype": "tcp", 00:21:04.645 "traddr": "10.0.0.2", 00:21:04.645 "adrfam": "ipv4", 00:21:04.645 "trsvcid": "4420", 00:21:04.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.645 "hdgst": false, 00:21:04.645 "ddgst": false 00:21:04.645 }, 00:21:04.645 "method": "bdev_nvme_attach_controller" 00:21:04.645 }' 00:21:04.645 [2024-07-22 12:16:12.516838] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:04.645 [2024-07-22 12:16:12.516941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1022375 ] 00:21:04.645 [2024-07-22 12:16:12.558323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
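The --json /dev/fd/62 argument in the bdevio invocation above is the visible trace of bash process substitution: the generated bdev config is handed to bdevio as an anonymous file descriptor and never touches disk. An equivalent invocation written out (paths abbreviated; gen_nvmf_target_json is the harness helper whose bdev_nvme_attach_controller output the log prints just above):

    # bdevio reads its bdev config from the substituted fd, attaches the
    # NVMe-oF controller described there, and runs its test suite against it.
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024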
00:21:04.903 [2024-07-22 12:16:12.577971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:04.903 [2024-07-22 12:16:12.660432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.903 [2024-07-22 12:16:12.660481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.903 [2024-07-22 12:16:12.660484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.212 I/O targets: 00:21:05.212 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:05.212 00:21:05.212 00:21:05.212 CUnit - A unit testing framework for C - Version 2.1-3 00:21:05.212 http://cunit.sourceforge.net/ 00:21:05.212 00:21:05.212 00:21:05.212 Suite: bdevio tests on: Nvme1n1 00:21:05.212 Test: blockdev write read block ...passed 00:21:05.212 Test: blockdev write zeroes read block ...passed 00:21:05.212 Test: blockdev write zeroes read no split ...passed 00:21:05.501 Test: blockdev write zeroes read split ...passed 00:21:05.501 Test: blockdev write zeroes read split partial ...passed 00:21:05.501 Test: blockdev reset ...[2024-07-22 12:16:13.191096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:05.501 [2024-07-22 12:16:13.191216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f4330 (9): Bad file descriptor 00:21:05.501 [2024-07-22 12:16:13.209232] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:05.501 passed 00:21:05.501 Test: blockdev write read 8 blocks ...passed 00:21:05.501 Test: blockdev write read size > 128k ...passed 00:21:05.501 Test: blockdev write read invalid size ...passed 00:21:05.501 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:05.501 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:05.501 Test: blockdev write read max offset ...passed 00:21:05.501 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:05.501 Test: blockdev writev readv 8 blocks ...passed 00:21:05.501 Test: blockdev writev readv 30 x 1block ...passed 00:21:05.501 Test: blockdev writev readv block ...passed 00:21:05.759 Test: blockdev writev readv size > 128k ...passed 00:21:05.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:05.760 Test: blockdev comparev and writev ...[2024-07-22 12:16:13.462393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:05.760 [2024-07-22 12:16:13.462430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.462460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:05.760 [2024-07-22 12:16:13.462477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.462892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:05.760 [2024-07-22 12:16:13.462917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.462938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:21:05.760 [2024-07-22 12:16:13.462955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.463375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:05.760 [2024-07-22 12:16:13.463398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.463418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:05.760 [2024-07-22 12:16:13.463434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.463804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:05.760 [2024-07-22 12:16:13.463828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.463849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:05.760 [2024-07-22 12:16:13.463864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:05.760 passed 00:21:05.760 Test: blockdev nvme passthru rw ...passed 00:21:05.760 Test: blockdev nvme passthru vendor specific ...[2024-07-22 12:16:13.545940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.760 [2024-07-22 12:16:13.545967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.546140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.760 [2024-07-22 12:16:13.546162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.546334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.760 [2024-07-22 12:16:13.546355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:05.760 [2024-07-22 12:16:13.546535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.760 [2024-07-22 12:16:13.546558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:05.760 passed 00:21:05.760 Test: blockdev nvme admin passthru ...passed 00:21:05.760 Test: blockdev copy ...passed 00:21:05.760 00:21:05.760 Run Summary: Type Total Ran Passed Failed Inactive 00:21:05.760 suites 1 1 n/a 0 0 00:21:05.760 tests 23 23 23 0 0 00:21:05.760 asserts 152 152 152 0 n/a 00:21:05.760 00:21:05.760 Elapsed time = 1.249 seconds 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.018 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.276 rmmod nvme_tcp 00:21:06.276 rmmod nvme_fabrics 00:21:06.276 rmmod nvme_keyring 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1022341 ']' 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1022341 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1022341 ']' 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1022341 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.276 12:16:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1022341 00:21:06.276 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:06.276 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:06.276 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1022341' 00:21:06.276 killing process with pid 1022341 00:21:06.276 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1022341 00:21:06.276 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1022341 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.534 12:16:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:09.107 12:16:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:09.107 00:21:09.107 real 0m6.514s 00:21:09.107 user 0m11.211s 00:21:09.107 sys 0m2.463s 00:21:09.107 12:16:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:09.107 12:16:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.107 ************************************ 00:21:09.107 END TEST nvmf_bdevio_no_huge 00:21:09.107 ************************************ 00:21:09.107 12:16:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:09.107 12:16:16 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:09.107 12:16:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:09.107 12:16:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.107 12:16:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:09.107 ************************************ 00:21:09.107 START TEST nvmf_tls 00:21:09.107 ************************************ 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:09.107 * Looking for test storage... 00:21:09.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:09.107 12:16:16 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
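Everything tls.sh does from here on is issued through the rpc_py wrapper pinned above, and the initiator identity comes from nvme gen-hostnqn. A minimal sketch of the nvmf/common.sh plumbing this relies on; the paths follow this job's workspace layout, the NVME_HOST array is verbatim from the trace, and the HOSTID derivation is an assumption about how common.sh splits the NQN:

    # Sketch of the common.sh setup the tls.sh steps below depend on.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_py="$SPDK_ROOT/scripts/rpc.py"      # all target config goes via JSON-RPC

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # fresh UUID-based host NQN
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID suffix (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVMF_PORT=4420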
00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:09.108 12:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.014 
12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:11.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:11.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:11.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.014 12:16:18 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:11.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
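The interface split above is what lets one machine act as both sides of the fabric: cvl_0_0 becomes the target port inside the cvl_0_0_ns_spdk namespace at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The same topology can be rebuilt from the commands in the trace:

    # Two ports of one NIC, split into target (netns) and initiator (root ns).
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns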
00:21:11.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:21:11.014 00:21:11.014 --- 10.0.0.2 ping statistics --- 00:21:11.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.014 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:21:11.014 00:21:11.014 --- 10.0.0.1 ping statistics --- 00:21:11.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.014 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.014 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1024561 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1024561 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1024561 ']' 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.015 [2024-07-22 12:16:18.682870] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
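nvmfappstart launches the target inside the namespace with --wait-for-rpc, which parks the app after the RPC server is up but before subsystem initialization; that window is what lets the script switch the socket implementation to ssl and pin the TLS version before framework_start_init. Roughly, with the launch command taken from the trace and the readiness poll a simplification of common.sh's waitforlisten:

    # Start nvmf_tgt in the target namespace; init is deferred until
    # framework_start_init so sock options can still be changed first.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    # Wait for the RPC socket to accept requests (simplified waitforlisten).
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done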
00:21:11.015 [2024-07-22 12:16:18.682975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.015 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.015 [2024-07-22 12:16:18.722556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:11.015 [2024-07-22 12:16:18.749949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.015 [2024-07-22 12:16:18.837252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.015 [2024-07-22 12:16:18.837319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.015 [2024-07-22 12:16:18.837333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.015 [2024-07-22 12:16:18.837344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.015 [2024-07-22 12:16:18.837353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.015 [2024-07-22 12:16:18.837380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:11.015 12:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:11.274 true 00:21:11.535 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:11.535 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:11.535 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:11.535 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:11.535 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:11.794 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:11.794 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:12.051 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:12.052 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:12.052 12:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:12.309 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:12.309 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:12.568 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:12.568 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:12.568 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:12.568 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:12.827 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:12.827 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:12.827 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:13.085 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.085 12:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:13.344 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:13.344 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:13.344 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:13.603 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.603 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:13.861 12:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 
-- # mktemp 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.uQSNRIhx93 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.tKGWx3VdOD 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.uQSNRIhx93 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tKGWx3VdOD 00:21:14.119 12:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:14.376 12:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:14.634 12:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.uQSNRIhx93 00:21:14.634 12:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uQSNRIhx93 00:21:14.634 12:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:14.892 [2024-07-22 12:16:22.726811] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.892 12:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:15.149 12:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:15.406 [2024-07-22 12:16:23.292342] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.406 [2024-07-22 12:16:23.292586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.406 12:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:15.663 malloc0 00:21:15.663 12:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:15.921 12:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uQSNRIhx93 00:21:16.178 [2024-07-22 12:16:24.069586] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:16.178 12:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uQSNRIhx93 00:21:16.435 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.468 Initializing NVMe Controllers 00:21:26.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:21:26.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:26.468 Initialization complete. Launching workers. 00:21:26.468 ======================================================== 00:21:26.468 Latency(us) 00:21:26.468 Device Information : IOPS MiB/s Average min max 00:21:26.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7519.20 29.37 8514.19 1363.95 9173.87 00:21:26.468 ======================================================== 00:21:26.468 Total : 7519.20 29.37 8514.19 1363.95 9173.87 00:21:26.468 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQSNRIhx93 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uQSNRIhx93' 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1026332 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1026332 /var/tmp/bdevperf.sock 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1026332 ']' 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.468 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.468 [2024-07-22 12:16:34.239202] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:26.468 [2024-07-22 12:16:34.239283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026332 ] 00:21:26.468 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.468 [2024-07-22 12:16:34.269802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
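Before the bdevperf cases start, the TLS pieces configured above are worth unpacking. format_interchange_psk emits the NVMe/TCP PSK interchange format: the NVMeTLSkey-1 prefix, a digest field (01 for the variant requested with digest=1), and a base64 payload that is the key bytes with a little-endian CRC32 appended, which is why both generated strings carry four extra characters before the trailing colon. A sketch of that wrapping, reconstructed from the trace; the CRC/base64 layout is inferred, and it should reproduce the first key above:

    # Interchange-format wrapping as done by format_interchange_psk (reconstruction).
    python3 -c 'import base64, zlib; key = b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:01:" + base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode() + ":")'

The target side then reduces to the RPC sequence from the trace: pin the ssl socket implementation to TLS 1.3 while the app is still parked, finish init, and bind the PSK to the (host, subsystem) pair:

    "$rpc_py" sock_set_default_impl -i ssl
    "$rpc_py" sock_impl_set_options -i ssl --tls-version 13
    "$rpc_py" framework_start_init
    "$rpc_py" nvmf_create_transport -t tcp -o
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k               # -k: listener requires TLS
    "$rpc_py" bdev_malloc_create 32 4096 -b malloc0
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uQSNRIhx93   # key file is chmod 0600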
00:21:26.468 [2024-07-22 12:16:34.297318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.747 [2024-07-22 12:16:34.384802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.747 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.747 12:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:26.747 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uQSNRIhx93 00:21:27.005 [2024-07-22 12:16:34.717799] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.005 [2024-07-22 12:16:34.717959] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:27.005 TLSTESTn1 00:21:27.005 12:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:27.005 Running I/O for 10 seconds... 00:21:39.214 00:21:39.214 Latency(us) 00:21:39.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.214 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:39.214 Verification LBA range: start 0x0 length 0x2000 00:21:39.214 TLSTESTn1 : 10.03 3451.43 13.48 0.00 0.00 37000.13 9903.22 50486.99 00:21:39.214 =================================================================================================================== 00:21:39.214 Total : 3451.43 13.48 0.00 0.00 37000.13 9903.22 50486.99 00:21:39.214 0 00:21:39.214 12:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.214 12:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1026332 00:21:39.214 12:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1026332 ']' 00:21:39.214 12:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1026332 00:21:39.214 12:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.214 12:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.214 12:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1026332 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1026332' 00:21:39.214 killing process with pid 1026332 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1026332 00:21:39.214 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.214 00:21:39.214 Latency(us) 00:21:39.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.214 =================================================================================================================== 00:21:39.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.214 [2024-07-22 12:16:45.019401] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1026332 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tKGWx3VdOD 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tKGWx3VdOD 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tKGWx3VdOD 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tKGWx3VdOD' 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1027645 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1027645 /var/tmp/bdevperf.sock 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1027645 ']' 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.214 [2024-07-22 12:16:45.288785] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:39.214 [2024-07-22 12:16:45.288872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027645 ] 00:21:39.214 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.214 [2024-07-22 12:16:45.321828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:21:39.214 [2024-07-22 12:16:45.351044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.214 [2024-07-22 12:16:45.444198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.214 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tKGWx3VdOD 00:21:39.214 [2024-07-22 12:16:45.830309] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.214 [2024-07-22 12:16:45.830436] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:39.215 [2024-07-22 12:16:45.842269] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.215 [2024-07-22 12:16:45.842354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb98d0 (107): Transport endpoint is not connected 00:21:39.215 [2024-07-22 12:16:45.843344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb98d0 (9): Bad file descriptor 00:21:39.215 [2024-07-22 12:16:45.844343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:39.215 [2024-07-22 12:16:45.844364] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:39.215 [2024-07-22 12:16:45.844381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
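This is the first negative case: tmp.tKGWx3VdOD is a well-formed key, but it was never registered on the target, so the TLS handshake collapses and the initiator only sees the connection die (errno 107, then a bad file descriptor) before the controller lands in the failed state. The attach is the same RPC that just succeeded for TLSTESTn1, only with the wrong key:

    # Expected to fail: the target holds no PSK matching this key for host1/cnode1.
    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.tKGWx3VdOD \
        && echo "BUG: attach with an unregistered PSK should not succeed"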
00:21:39.215 request: 00:21:39.215 { 00:21:39.215 "name": "TLSTEST", 00:21:39.215 "trtype": "tcp", 00:21:39.215 "traddr": "10.0.0.2", 00:21:39.215 "adrfam": "ipv4", 00:21:39.215 "trsvcid": "4420", 00:21:39.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.215 "prchk_reftag": false, 00:21:39.215 "prchk_guard": false, 00:21:39.215 "hdgst": false, 00:21:39.215 "ddgst": false, 00:21:39.215 "psk": "/tmp/tmp.tKGWx3VdOD", 00:21:39.215 "method": "bdev_nvme_attach_controller", 00:21:39.215 "req_id": 1 00:21:39.215 } 00:21:39.215 Got JSON-RPC error response 00:21:39.215 response: 00:21:39.215 { 00:21:39.215 "code": -5, 00:21:39.215 "message": "Input/output error" 00:21:39.215 } 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1027645 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1027645 ']' 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1027645 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027645 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027645' 00:21:39.215 killing process with pid 1027645 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1027645 00:21:39.215 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.215 00:21:39.215 Latency(us) 00:21:39.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.215 =================================================================================================================== 00:21:39.215 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.215 [2024-07-22 12:16:45.892687] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.215 12:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1027645 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uQSNRIhx93 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uQSNRIhx93 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uQSNRIhx93 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uQSNRIhx93' 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1027779 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1027779 /var/tmp/bdevperf.sock 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1027779 ']' 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.215 [2024-07-22 12:16:46.151999] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:39.215 [2024-07-22 12:16:46.152091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027779 ] 00:21:39.215 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.215 [2024-07-22 12:16:46.183177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
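Each expected-failure attach is wrapped in common.sh's NOT helper, whose xtrace is what produces the es=0 / '(( es > 128 ))' / '(( !es == 0 ))' lines in between: the wrapped command must fail for the test to pass. A sketch of that assertion, modeled on the trace; the real body in autotest_common.sh also validates the argument and special-cases signal exits, which is elided here:

    # Assert that a command fails; succeed only on a non-zero exit status.
    NOT() {
        local es=0
        "$@" || es=$?
        # (the real helper also handles exit codes above 128 and pattern
        #  matching, per the '(( es > 128 ))' / '[[ -n '' ]]' trace lines)
        (( es != 0 ))
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uQSNRIhx93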
00:21:39.215 [2024-07-22 12:16:46.209951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.215 [2024-07-22 12:16:46.290434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.uQSNRIhx93 00:21:39.215 [2024-07-22 12:16:46.613308] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.215 [2024-07-22 12:16:46.613437] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:39.215 [2024-07-22 12:16:46.624176] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:39.215 [2024-07-22 12:16:46.624217] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:39.215 [2024-07-22 12:16:46.624257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.215 [2024-07-22 12:16:46.625331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba8d0 (107): Transport endpoint is not connected 00:21:39.215 [2024-07-22 12:16:46.626322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba8d0 (9): Bad file descriptor 00:21:39.215 [2024-07-22 12:16:46.627322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:39.215 [2024-07-22 12:16:46.627342] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:39.215 [2024-07-22 12:16:46.627358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:39.215 request: 00:21:39.215 { 00:21:39.215 "name": "TLSTEST", 00:21:39.215 "trtype": "tcp", 00:21:39.215 "traddr": "10.0.0.2", 00:21:39.215 "adrfam": "ipv4", 00:21:39.215 "trsvcid": "4420", 00:21:39.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.215 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:39.215 "prchk_reftag": false, 00:21:39.215 "prchk_guard": false, 00:21:39.215 "hdgst": false, 00:21:39.215 "ddgst": false, 00:21:39.215 "psk": "/tmp/tmp.uQSNRIhx93", 00:21:39.215 "method": "bdev_nvme_attach_controller", 00:21:39.215 "req_id": 1 00:21:39.215 } 00:21:39.215 Got JSON-RPC error response 00:21:39.215 response: 00:21:39.215 { 00:21:39.215 "code": -5, 00:21:39.215 "message": "Input/output error" 00:21:39.215 } 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1027779 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1027779 ']' 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1027779 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027779 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027779' 00:21:39.215 killing process with pid 1027779 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1027779 00:21:39.215 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.215 00:21:39.215 Latency(us) 00:21:39.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.215 =================================================================================================================== 00:21:39.215 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.215 [2024-07-22 12:16:46.677946] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1027779 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:39.215 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQSNRIhx93 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQSNRIhx93 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uQSNRIhx93 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uQSNRIhx93' 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1027800 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1027800 /var/tmp/bdevperf.sock 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1027800 ']' 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.216 12:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.216 [2024-07-22 12:16:46.941699] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:39.216 [2024-07-22 12:16:46.941783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027800 ] 00:21:39.216 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.216 [2024-07-22 12:16:46.973658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:39.216 [2024-07-22 12:16:47.003057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.216 [2024-07-22 12:16:47.092561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.473 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.473 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.473 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uQSNRIhx93 00:21:39.730 [2024-07-22 12:16:47.417348] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.730 [2024-07-22 12:16:47.417478] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:39.730 [2024-07-22 12:16:47.426203] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:39.730 [2024-07-22 12:16:47.426234] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:39.730 [2024-07-22 12:16:47.426297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.730 [2024-07-22 12:16:47.427338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188b8d0 (107): Transport endpoint is not connected 00:21:39.730 [2024-07-22 12:16:47.428331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188b8d0 (9): Bad file descriptor 00:21:39.730 [2024-07-22 12:16:47.429330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:39.730 [2024-07-22 12:16:47.429355] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:39.730 [2024-07-22 12:16:47.429381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:39.730 request: 00:21:39.730 { 00:21:39.730 "name": "TLSTEST", 00:21:39.730 "trtype": "tcp", 00:21:39.730 "traddr": "10.0.0.2", 00:21:39.730 "adrfam": "ipv4", 00:21:39.730 "trsvcid": "4420", 00:21:39.730 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:39.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.730 "prchk_reftag": false, 00:21:39.730 "prchk_guard": false, 00:21:39.730 "hdgst": false, 00:21:39.730 "ddgst": false, 00:21:39.730 "psk": "/tmp/tmp.uQSNRIhx93", 00:21:39.730 "method": "bdev_nvme_attach_controller", 00:21:39.730 "req_id": 1 00:21:39.730 } 00:21:39.730 Got JSON-RPC error response 00:21:39.730 response: 00:21:39.730 { 00:21:39.730 "code": -5, 00:21:39.730 "message": "Input/output error" 00:21:39.730 } 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1027800 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1027800 ']' 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1027800 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027800 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027800' 00:21:39.730 killing process with pid 1027800 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1027800 00:21:39.730 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.730 00:21:39.730 Latency(us) 00:21:39.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.730 =================================================================================================================== 00:21:39.730 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.730 [2024-07-22 12:16:47.472566] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.730 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1027800 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1027933 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1027933 /var/tmp/bdevperf.sock 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1027933 ']' 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.989 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.989 [2024-07-22 12:16:47.706587] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:39.989 [2024-07-22 12:16:47.706688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027933 ] 00:21:39.989 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.989 [2024-07-22 12:16:47.738156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:39.989 [2024-07-22 12:16:47.765610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.989 [2024-07-22 12:16:47.853445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.247 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.247 12:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:40.247 12:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:40.506 [2024-07-22 12:16:48.199223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:40.506 [2024-07-22 12:16:48.201039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183cde0 (9): Bad file descriptor 00:21:40.506 [2024-07-22 12:16:48.202036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:40.506 [2024-07-22 12:16:48.202057] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:40.506 [2024-07-22 12:16:48.202074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:40.506 request: 00:21:40.506 { 00:21:40.506 "name": "TLSTEST", 00:21:40.506 "trtype": "tcp", 00:21:40.506 "traddr": "10.0.0.2", 00:21:40.506 "adrfam": "ipv4", 00:21:40.506 "trsvcid": "4420", 00:21:40.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.506 "prchk_reftag": false, 00:21:40.506 "prchk_guard": false, 00:21:40.506 "hdgst": false, 00:21:40.506 "ddgst": false, 00:21:40.506 "method": "bdev_nvme_attach_controller", 00:21:40.506 "req_id": 1 00:21:40.506 } 00:21:40.506 Got JSON-RPC error response 00:21:40.506 response: 00:21:40.506 { 00:21:40.506 "code": -5, 00:21:40.506 "message": "Input/output error" 00:21:40.506 } 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1027933 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1027933 ']' 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1027933 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027933 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027933' 00:21:40.506 killing process with pid 1027933 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1027933 00:21:40.506 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.506 00:21:40.506 Latency(us) 00:21:40.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.506 =================================================================================================================== 00:21:40.506 Total : 0.00 0.00 
0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:40.506 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1027933 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1024561 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1024561 ']' 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1024561 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024561 00:21:40.764 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.765 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.765 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024561' 00:21:40.765 killing process with pid 1024561 00:21:40.765 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1024561 00:21:40.765 [2024-07-22 12:16:48.496886] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.765 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1024561 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.fSldM0Xo6C 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.fSldM0Xo6C 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
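The format_interchange_psk step above wraps the configured secret in the TP 8006 interchange format with an inline python snippet. A standalone sketch of the same transformation (assuming the helper's semantics: base64 of the secret followed by its CRC-32 packed little-endian, with hmac id 02 selecting the SHA-384 variant):

python3 - <<'EOF'
import base64, struct, zlib

secret = b"00112233445566778899aabbccddeeff0011223344556677"
hmac_id = 2  # 1 = SHA-256, 2 = SHA-384
# Payload is the secret plus its CRC-32 (little-endian), base64-encoded,
# framed as "NVMeTLSkey-1:<hmac>:<payload>:".
payload = secret + struct.pack("<I", zlib.crc32(secret))
print("NVMeTLSkey-1:%02x:%s:" % (hmac_id, base64.b64encode(payload).decode()))
# Should reproduce the key_long value logged above.
EOF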
00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1028088 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1028088 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1028088 ']' 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.022 12:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.022 [2024-07-22 12:16:48.845176] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:41.022 [2024-07-22 12:16:48.845267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.022 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.023 [2024-07-22 12:16:48.881223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:41.023 [2024-07-22 12:16:48.907826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.279 [2024-07-22 12:16:48.991793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.279 [2024-07-22 12:16:48.991850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.279 [2024-07-22 12:16:48.991873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.279 [2024-07-22 12:16:48.991883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.279 [2024-07-22 12:16:48.991893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.279 [2024-07-22 12:16:48.991925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.fSldM0Xo6C 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fSldM0Xo6C 00:21:41.279 12:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:41.535 [2024-07-22 12:16:49.345826] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.535 12:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:41.793 12:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:42.092 [2024-07-22 12:16:49.831193] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.092 [2024-07-22 12:16:49.831458] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.092 12:16:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:42.351 malloc0 00:21:42.351 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:42.610 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C 00:21:42.868 [2024-07-22 12:16:50.585682] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fSldM0Xo6C 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fSldM0Xo6C' 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1028362 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1028362 
/var/tmp/bdevperf.sock 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1028362 ']' 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.868 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.868 [2024-07-22 12:16:50.645717] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:42.868 [2024-07-22 12:16:50.645805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028362 ] 00:21:42.868 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.868 [2024-07-22 12:16:50.677555] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:42.868 [2024-07-22 12:16:50.704861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.868 [2024-07-22 12:16:50.790504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.126 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.126 12:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:43.126 12:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C 00:21:43.384 [2024-07-22 12:16:51.129716] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.384 [2024-07-22 12:16:51.129840] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:43.384 TLSTESTn1 00:21:43.384 12:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:43.641 Running I/O for 10 seconds... 
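This is the first attach in the section that succeeds, and the rpc.py calls leading up to it are the complete TLS recipe: a TCP transport, a subsystem with a TLS-enabled (-k) listener, a malloc namespace, and a host entry bound to an owner-only PSK file, followed by an initiator attach using the same file. Condensed from the commands above (a sketch; paths as in this run):

# Target side (default /var/tmp/spdk.sock):
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k        # -k turns on TLS for this listener
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C
# Initiator side (bdevperf's RPC socket), same 0600-mode PSK file:
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C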
00:21:53.642 00:21:53.642 Latency(us) 00:21:53.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.642 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:53.642 Verification LBA range: start 0x0 length 0x2000 00:21:53.642 TLSTESTn1 : 10.03 3373.90 13.18 0.00 0.00 37848.89 6407.96 52428.80 00:21:53.642 =================================================================================================================== 00:21:53.642 Total : 3373.90 13.18 0.00 0.00 37848.89 6407.96 52428.80 00:21:53.642 0 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1028362 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1028362 ']' 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1028362 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1028362 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1028362' 00:21:53.642 killing process with pid 1028362 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1028362 00:21:53.642 Received shutdown signal, test time was about 10.000000 seconds 00:21:53.642 00:21:53.642 Latency(us) 00:21:53.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.642 =================================================================================================================== 00:21:53.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.642 [2024-07-22 12:17:01.437333] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:53.642 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1028362 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.fSldM0Xo6C 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fSldM0Xo6C 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fSldM0Xo6C 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fSldM0Xo6C 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fSldM0Xo6C' 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1029563 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1029563 /var/tmp/bdevperf.sock 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1029563 ']' 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.901 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.901 [2024-07-22 12:17:01.713190] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:53.901 [2024-07-22 12:17:01.713274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029563 ] 00:21:53.901 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.901 [2024-07-22 12:17:01.745550] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
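The chmod 0666 above stages the next NOT test: the initiator refuses a PSK file that group or other can read. As the records that follow show, bdev_nvme_load_psk fails with "Incorrect permissions for PSK file" and the RPC returns -1 (Operation not permitted); restoring owner-only access is all that is needed:

chmod 0600 /tmp/tmp.fSldM0Xo6C   # owner-only mode is required before
                                 # bdev_nvme_attach_controller will load the key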
00:21:53.901 [2024-07-22 12:17:01.773719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.159 [2024-07-22 12:17:01.862342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.159 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.159 12:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:54.159 12:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C 00:21:54.419 [2024-07-22 12:17:02.244894] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.419 [2024-07-22 12:17:02.244989] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:54.419 [2024-07-22 12:17:02.245004] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.fSldM0Xo6C 00:21:54.419 request: 00:21:54.419 { 00:21:54.419 "name": "TLSTEST", 00:21:54.419 "trtype": "tcp", 00:21:54.419 "traddr": "10.0.0.2", 00:21:54.419 "adrfam": "ipv4", 00:21:54.419 "trsvcid": "4420", 00:21:54.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:54.419 "prchk_reftag": false, 00:21:54.419 "prchk_guard": false, 00:21:54.419 "hdgst": false, 00:21:54.419 "ddgst": false, 00:21:54.419 "psk": "/tmp/tmp.fSldM0Xo6C", 00:21:54.419 "method": "bdev_nvme_attach_controller", 00:21:54.419 "req_id": 1 00:21:54.419 } 00:21:54.419 Got JSON-RPC error response 00:21:54.419 response: 00:21:54.419 { 00:21:54.419 "code": -1, 00:21:54.419 "message": "Operation not permitted" 00:21:54.419 } 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1029563 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1029563 ']' 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1029563 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1029563 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1029563' 00:21:54.419 killing process with pid 1029563 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1029563 00:21:54.419 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.419 00:21:54.419 Latency(us) 00:21:54.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.419 =================================================================================================================== 00:21:54.419 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:54.419 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1029563 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:54.679 12:17:02 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1028088 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1028088 ']' 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1028088 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1028088 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1028088' 00:21:54.679 killing process with pid 1028088 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1028088 00:21:54.679 [2024-07-22 12:17:02.541100] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:54.679 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1028088 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1029706 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1029706 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1029706 ']' 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.981 12:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.981 [2024-07-22 12:17:02.834825] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:21:54.981 [2024-07-22 12:17:02.834922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.981 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.981 [2024-07-22 12:17:02.871387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:54.981 [2024-07-22 12:17:02.903307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.261 [2024-07-22 12:17:02.996671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.261 [2024-07-22 12:17:02.996734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.261 [2024-07-22 12:17:02.996750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.261 [2024-07-22 12:17:02.996764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.261 [2024-07-22 12:17:02.996776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.261 [2024-07-22 12:17:02.996807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.fSldM0Xo6C 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fSldM0Xo6C 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.fSldM0Xo6C 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fSldM0Xo6C 00:21:55.261 12:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:55.518 [2024-07-22 12:17:03.413058] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.518 12:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:55.775 12:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:56.033 [2024-07-22 12:17:03.906447] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.033 [2024-07-22 12:17:03.906719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.033 12:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:56.289 malloc0 00:21:56.289 12:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C 00:21:56.855 [2024-07-22 12:17:04.760691] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:56.855 [2024-07-22 12:17:04.760733] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:56.855 [2024-07-22 12:17:04.760765] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:56.855 request: 00:21:56.855 { 00:21:56.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.855 "host": "nqn.2016-06.io.spdk:host1", 00:21:56.855 "psk": "/tmp/tmp.fSldM0Xo6C", 00:21:56.855 "method": "nvmf_subsystem_add_host", 00:21:56.855 "req_id": 1 00:21:56.855 } 00:21:56.855 Got JSON-RPC error response 00:21:56.855 response: 00:21:56.855 { 00:21:56.855 "code": -32603, 00:21:56.855 "message": "Internal error" 00:21:56.855 } 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1029706 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1029706 ']' 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1029706 00:21:56.855 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:57.115 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.115 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1029706 00:21:57.115 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:57.115 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:57.115 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1029706' 00:21:57.115 killing process with pid 1029706 00:21:57.115 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1029706 00:21:57.115 12:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1029706 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.fSldM0Xo6C 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1030005 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1030005 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1030005 ']' 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.373 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.373 [2024-07-22 12:17:05.114105] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:57.373 [2024-07-22 12:17:05.114204] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.373 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.373 [2024-07-22 12:17:05.152584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:57.373 [2024-07-22 12:17:05.185146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.373 [2024-07-22 12:17:05.274523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.373 [2024-07-22 12:17:05.274590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.373 [2024-07-22 12:17:05.274624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.373 [2024-07-22 12:17:05.274639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.373 [2024-07-22 12:17:05.274652] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
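Three distinct JSON-RPC failures appear in this part of the run, each from a different checkpoint: -5 (Input/output error) when the TLS handshake fails because no PSK matches the identity or none was supplied; -1 (Operation not permitted) when the initiator's bdev_nvme_load_psk rejects a group/other-readable key file; and -32603 (Internal error) when the same permission gate fires on the target side in tcp_load_psk, so nvmf_subsystem_add_host, and with it the whole setup_nvmf_tgt step, fails until the chmod 0600 at target/tls.sh@181 restores the mode.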
00:21:57.373 [2024-07-22 12:17:05.274697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.fSldM0Xo6C 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fSldM0Xo6C 00:21:57.632 12:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:57.890 [2024-07-22 12:17:05.638919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.890 12:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:58.147 12:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:58.404 [2024-07-22 12:17:06.132332] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.404 [2024-07-22 12:17:06.132588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.404 12:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:58.661 malloc0 00:21:58.661 12:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:58.918 12:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C 00:21:59.175 [2024-07-22 12:17:06.885865] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1030279 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1030279 /var/tmp/bdevperf.sock 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1030279 ']' 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.175 12:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.175 [2024-07-22 12:17:06.948042] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:21:59.175 [2024-07-22 12:17:06.948131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030279 ] 00:21:59.175 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.175 [2024-07-22 12:17:06.979588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:59.175 [2024-07-22 12:17:07.006103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.175 [2024-07-22 12:17:07.090786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.433 12:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.433 12:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:59.433 12:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C 00:21:59.691 [2024-07-22 12:17:07.421863] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.691 [2024-07-22 12:17:07.422001] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:59.691 TLSTESTn1 00:21:59.691 12:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:59.949 12:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:59.949 "subsystems": [ 00:21:59.949 { 00:21:59.949 "subsystem": "keyring", 00:21:59.949 "config": [] 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "subsystem": "iobuf", 00:21:59.949 "config": [ 00:21:59.949 { 00:21:59.949 "method": "iobuf_set_options", 00:21:59.949 "params": { 00:21:59.949 "small_pool_count": 8192, 00:21:59.949 "large_pool_count": 1024, 00:21:59.949 "small_bufsize": 8192, 00:21:59.949 "large_bufsize": 135168 00:21:59.949 } 00:21:59.949 } 00:21:59.949 ] 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "subsystem": "sock", 00:21:59.949 "config": [ 00:21:59.949 { 00:21:59.949 "method": "sock_set_default_impl", 00:21:59.949 "params": { 00:21:59.949 "impl_name": "posix" 00:21:59.949 } 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "method": "sock_impl_set_options", 00:21:59.949 "params": { 00:21:59.949 "impl_name": "ssl", 00:21:59.949 "recv_buf_size": 4096, 00:21:59.949 "send_buf_size": 4096, 00:21:59.949 "enable_recv_pipe": true, 00:21:59.949 "enable_quickack": false, 00:21:59.949 "enable_placement_id": 0, 00:21:59.949 "enable_zerocopy_send_server": true, 00:21:59.949 "enable_zerocopy_send_client": false, 00:21:59.949 "zerocopy_threshold": 0, 00:21:59.949 "tls_version": 0, 00:21:59.949 "enable_ktls": false 00:21:59.949 
} 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "method": "sock_impl_set_options", 00:21:59.949 "params": { 00:21:59.949 "impl_name": "posix", 00:21:59.949 "recv_buf_size": 2097152, 00:21:59.949 "send_buf_size": 2097152, 00:21:59.949 "enable_recv_pipe": true, 00:21:59.949 "enable_quickack": false, 00:21:59.949 "enable_placement_id": 0, 00:21:59.949 "enable_zerocopy_send_server": true, 00:21:59.949 "enable_zerocopy_send_client": false, 00:21:59.949 "zerocopy_threshold": 0, 00:21:59.949 "tls_version": 0, 00:21:59.949 "enable_ktls": false 00:21:59.949 } 00:21:59.949 } 00:21:59.949 ] 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "subsystem": "vmd", 00:21:59.949 "config": [] 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "subsystem": "accel", 00:21:59.949 "config": [ 00:21:59.949 { 00:21:59.949 "method": "accel_set_options", 00:21:59.949 "params": { 00:21:59.949 "small_cache_size": 128, 00:21:59.949 "large_cache_size": 16, 00:21:59.949 "task_count": 2048, 00:21:59.949 "sequence_count": 2048, 00:21:59.949 "buf_count": 2048 00:21:59.949 } 00:21:59.949 } 00:21:59.949 ] 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "subsystem": "bdev", 00:21:59.949 "config": [ 00:21:59.949 { 00:21:59.949 "method": "bdev_set_options", 00:21:59.949 "params": { 00:21:59.949 "bdev_io_pool_size": 65535, 00:21:59.949 "bdev_io_cache_size": 256, 00:21:59.949 "bdev_auto_examine": true, 00:21:59.949 "iobuf_small_cache_size": 128, 00:21:59.949 "iobuf_large_cache_size": 16 00:21:59.949 } 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "method": "bdev_raid_set_options", 00:21:59.949 "params": { 00:21:59.949 "process_window_size_kb": 1024, 00:21:59.949 "process_max_bandwidth_mb_sec": 0 00:21:59.949 } 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "method": "bdev_iscsi_set_options", 00:21:59.949 "params": { 00:21:59.949 "timeout_sec": 30 00:21:59.949 } 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "method": "bdev_nvme_set_options", 00:21:59.949 "params": { 00:21:59.949 "action_on_timeout": "none", 00:21:59.949 "timeout_us": 0, 00:21:59.949 "timeout_admin_us": 0, 00:21:59.949 "keep_alive_timeout_ms": 10000, 00:21:59.949 "arbitration_burst": 0, 00:21:59.949 "low_priority_weight": 0, 00:21:59.949 "medium_priority_weight": 0, 00:21:59.949 "high_priority_weight": 0, 00:21:59.949 "nvme_adminq_poll_period_us": 10000, 00:21:59.949 "nvme_ioq_poll_period_us": 0, 00:21:59.949 "io_queue_requests": 0, 00:21:59.949 "delay_cmd_submit": true, 00:21:59.949 "transport_retry_count": 4, 00:21:59.949 "bdev_retry_count": 3, 00:21:59.949 "transport_ack_timeout": 0, 00:21:59.949 "ctrlr_loss_timeout_sec": 0, 00:21:59.949 "reconnect_delay_sec": 0, 00:21:59.949 "fast_io_fail_timeout_sec": 0, 00:21:59.949 "disable_auto_failback": false, 00:21:59.949 "generate_uuids": false, 00:21:59.949 "transport_tos": 0, 00:21:59.949 "nvme_error_stat": false, 00:21:59.949 "rdma_srq_size": 0, 00:21:59.949 "io_path_stat": false, 00:21:59.949 "allow_accel_sequence": false, 00:21:59.949 "rdma_max_cq_size": 0, 00:21:59.949 "rdma_cm_event_timeout_ms": 0, 00:21:59.949 "dhchap_digests": [ 00:21:59.949 "sha256", 00:21:59.949 "sha384", 00:21:59.949 "sha512" 00:21:59.949 ], 00:21:59.949 "dhchap_dhgroups": [ 00:21:59.949 "null", 00:21:59.949 "ffdhe2048", 00:21:59.949 "ffdhe3072", 00:21:59.949 "ffdhe4096", 00:21:59.949 "ffdhe6144", 00:21:59.949 "ffdhe8192" 00:21:59.949 ] 00:21:59.949 } 00:21:59.949 }, 00:21:59.949 { 00:21:59.949 "method": "bdev_nvme_set_hotplug", 00:21:59.949 "params": { 00:21:59.949 "period_us": 100000, 00:21:59.949 "enable": false 00:21:59.949 } 00:21:59.949 }, 00:21:59.949 { 
00:21:59.949 "method": "bdev_malloc_create", 00:21:59.949 "params": { 00:21:59.949 "name": "malloc0", 00:21:59.949 "num_blocks": 8192, 00:21:59.949 "block_size": 4096, 00:21:59.949 "physical_block_size": 4096, 00:21:59.949 "uuid": "d01b9d07-79c4-4821-b85f-5c8eabad338d", 00:21:59.949 "optimal_io_boundary": 0 00:21:59.949 } 00:21:59.949 }, 00:21:59.949 { 00:21:59.950 "method": "bdev_wait_for_examine" 00:21:59.950 } 00:21:59.950 ] 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "subsystem": "nbd", 00:21:59.950 "config": [] 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "subsystem": "scheduler", 00:21:59.950 "config": [ 00:21:59.950 { 00:21:59.950 "method": "framework_set_scheduler", 00:21:59.950 "params": { 00:21:59.950 "name": "static" 00:21:59.950 } 00:21:59.950 } 00:21:59.950 ] 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "subsystem": "nvmf", 00:21:59.950 "config": [ 00:21:59.950 { 00:21:59.950 "method": "nvmf_set_config", 00:21:59.950 "params": { 00:21:59.950 "discovery_filter": "match_any", 00:21:59.950 "admin_cmd_passthru": { 00:21:59.950 "identify_ctrlr": false 00:21:59.950 } 00:21:59.950 } 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "method": "nvmf_set_max_subsystems", 00:21:59.950 "params": { 00:21:59.950 "max_subsystems": 1024 00:21:59.950 } 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "method": "nvmf_set_crdt", 00:21:59.950 "params": { 00:21:59.950 "crdt1": 0, 00:21:59.950 "crdt2": 0, 00:21:59.950 "crdt3": 0 00:21:59.950 } 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "method": "nvmf_create_transport", 00:21:59.950 "params": { 00:21:59.950 "trtype": "TCP", 00:21:59.950 "max_queue_depth": 128, 00:21:59.950 "max_io_qpairs_per_ctrlr": 127, 00:21:59.950 "in_capsule_data_size": 4096, 00:21:59.950 "max_io_size": 131072, 00:21:59.950 "io_unit_size": 131072, 00:21:59.950 "max_aq_depth": 128, 00:21:59.950 "num_shared_buffers": 511, 00:21:59.950 "buf_cache_size": 4294967295, 00:21:59.950 "dif_insert_or_strip": false, 00:21:59.950 "zcopy": false, 00:21:59.950 "c2h_success": false, 00:21:59.950 "sock_priority": 0, 00:21:59.950 "abort_timeout_sec": 1, 00:21:59.950 "ack_timeout": 0, 00:21:59.950 "data_wr_pool_size": 0 00:21:59.950 } 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "method": "nvmf_create_subsystem", 00:21:59.950 "params": { 00:21:59.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.950 "allow_any_host": false, 00:21:59.950 "serial_number": "SPDK00000000000001", 00:21:59.950 "model_number": "SPDK bdev Controller", 00:21:59.950 "max_namespaces": 10, 00:21:59.950 "min_cntlid": 1, 00:21:59.950 "max_cntlid": 65519, 00:21:59.950 "ana_reporting": false 00:21:59.950 } 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "method": "nvmf_subsystem_add_host", 00:21:59.950 "params": { 00:21:59.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.950 "host": "nqn.2016-06.io.spdk:host1", 00:21:59.950 "psk": "/tmp/tmp.fSldM0Xo6C" 00:21:59.950 } 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "method": "nvmf_subsystem_add_ns", 00:21:59.950 "params": { 00:21:59.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.950 "namespace": { 00:21:59.950 "nsid": 1, 00:21:59.950 "bdev_name": "malloc0", 00:21:59.950 "nguid": "D01B9D0779C44821B85F5C8EABAD338D", 00:21:59.950 "uuid": "d01b9d07-79c4-4821-b85f-5c8eabad338d", 00:21:59.950 "no_auto_visible": false 00:21:59.950 } 00:21:59.950 } 00:21:59.950 }, 00:21:59.950 { 00:21:59.950 "method": "nvmf_subsystem_add_listener", 00:21:59.950 "params": { 00:21:59.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.950 "listen_address": { 00:21:59.950 "trtype": "TCP", 00:21:59.950 "adrfam": "IPv4", 
00:21:59.950 "traddr": "10.0.0.2", 00:21:59.950 "trsvcid": "4420" 00:21:59.950 }, 00:21:59.950 "secure_channel": true 00:21:59.950 } 00:21:59.950 } 00:21:59.950 ] 00:21:59.950 } 00:21:59.950 ] 00:21:59.950 }' 00:21:59.950 12:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:00.209 12:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:00.209 "subsystems": [ 00:22:00.209 { 00:22:00.209 "subsystem": "keyring", 00:22:00.209 "config": [] 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "subsystem": "iobuf", 00:22:00.209 "config": [ 00:22:00.209 { 00:22:00.209 "method": "iobuf_set_options", 00:22:00.209 "params": { 00:22:00.209 "small_pool_count": 8192, 00:22:00.209 "large_pool_count": 1024, 00:22:00.209 "small_bufsize": 8192, 00:22:00.209 "large_bufsize": 135168 00:22:00.209 } 00:22:00.209 } 00:22:00.209 ] 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "subsystem": "sock", 00:22:00.209 "config": [ 00:22:00.209 { 00:22:00.209 "method": "sock_set_default_impl", 00:22:00.209 "params": { 00:22:00.209 "impl_name": "posix" 00:22:00.209 } 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "method": "sock_impl_set_options", 00:22:00.209 "params": { 00:22:00.209 "impl_name": "ssl", 00:22:00.209 "recv_buf_size": 4096, 00:22:00.209 "send_buf_size": 4096, 00:22:00.209 "enable_recv_pipe": true, 00:22:00.209 "enable_quickack": false, 00:22:00.209 "enable_placement_id": 0, 00:22:00.209 "enable_zerocopy_send_server": true, 00:22:00.209 "enable_zerocopy_send_client": false, 00:22:00.209 "zerocopy_threshold": 0, 00:22:00.209 "tls_version": 0, 00:22:00.209 "enable_ktls": false 00:22:00.209 } 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "method": "sock_impl_set_options", 00:22:00.209 "params": { 00:22:00.209 "impl_name": "posix", 00:22:00.209 "recv_buf_size": 2097152, 00:22:00.209 "send_buf_size": 2097152, 00:22:00.209 "enable_recv_pipe": true, 00:22:00.209 "enable_quickack": false, 00:22:00.209 "enable_placement_id": 0, 00:22:00.209 "enable_zerocopy_send_server": true, 00:22:00.209 "enable_zerocopy_send_client": false, 00:22:00.209 "zerocopy_threshold": 0, 00:22:00.209 "tls_version": 0, 00:22:00.209 "enable_ktls": false 00:22:00.209 } 00:22:00.209 } 00:22:00.209 ] 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "subsystem": "vmd", 00:22:00.209 "config": [] 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "subsystem": "accel", 00:22:00.209 "config": [ 00:22:00.209 { 00:22:00.209 "method": "accel_set_options", 00:22:00.209 "params": { 00:22:00.209 "small_cache_size": 128, 00:22:00.209 "large_cache_size": 16, 00:22:00.209 "task_count": 2048, 00:22:00.209 "sequence_count": 2048, 00:22:00.209 "buf_count": 2048 00:22:00.209 } 00:22:00.209 } 00:22:00.209 ] 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "subsystem": "bdev", 00:22:00.209 "config": [ 00:22:00.209 { 00:22:00.209 "method": "bdev_set_options", 00:22:00.209 "params": { 00:22:00.209 "bdev_io_pool_size": 65535, 00:22:00.209 "bdev_io_cache_size": 256, 00:22:00.209 "bdev_auto_examine": true, 00:22:00.209 "iobuf_small_cache_size": 128, 00:22:00.209 "iobuf_large_cache_size": 16 00:22:00.209 } 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "method": "bdev_raid_set_options", 00:22:00.209 "params": { 00:22:00.209 "process_window_size_kb": 1024, 00:22:00.209 "process_max_bandwidth_mb_sec": 0 00:22:00.209 } 00:22:00.209 }, 00:22:00.209 { 00:22:00.209 "method": "bdev_iscsi_set_options", 00:22:00.209 "params": { 00:22:00.209 "timeout_sec": 30 00:22:00.209 } 00:22:00.209 }, 
00:22:00.209 { 00:22:00.209 "method": "bdev_nvme_set_options", 00:22:00.209 "params": { 00:22:00.209 "action_on_timeout": "none", 00:22:00.209 "timeout_us": 0, 00:22:00.209 "timeout_admin_us": 0, 00:22:00.209 "keep_alive_timeout_ms": 10000, 00:22:00.209 "arbitration_burst": 0, 00:22:00.209 "low_priority_weight": 0, 00:22:00.209 "medium_priority_weight": 0, 00:22:00.209 "high_priority_weight": 0, 00:22:00.209 "nvme_adminq_poll_period_us": 10000, 00:22:00.209 "nvme_ioq_poll_period_us": 0, 00:22:00.210 "io_queue_requests": 512, 00:22:00.210 "delay_cmd_submit": true, 00:22:00.210 "transport_retry_count": 4, 00:22:00.210 "bdev_retry_count": 3, 00:22:00.210 "transport_ack_timeout": 0, 00:22:00.210 "ctrlr_loss_timeout_sec": 0, 00:22:00.210 "reconnect_delay_sec": 0, 00:22:00.210 "fast_io_fail_timeout_sec": 0, 00:22:00.210 "disable_auto_failback": false, 00:22:00.210 "generate_uuids": false, 00:22:00.210 "transport_tos": 0, 00:22:00.210 "nvme_error_stat": false, 00:22:00.210 "rdma_srq_size": 0, 00:22:00.210 "io_path_stat": false, 00:22:00.210 "allow_accel_sequence": false, 00:22:00.210 "rdma_max_cq_size": 0, 00:22:00.210 "rdma_cm_event_timeout_ms": 0, 00:22:00.210 "dhchap_digests": [ 00:22:00.210 "sha256", 00:22:00.210 "sha384", 00:22:00.210 "sha512" 00:22:00.210 ], 00:22:00.210 "dhchap_dhgroups": [ 00:22:00.210 "null", 00:22:00.210 "ffdhe2048", 00:22:00.210 "ffdhe3072", 00:22:00.210 "ffdhe4096", 00:22:00.210 "ffdhe6144", 00:22:00.210 "ffdhe8192" 00:22:00.210 ] 00:22:00.210 } 00:22:00.210 }, 00:22:00.210 { 00:22:00.210 "method": "bdev_nvme_attach_controller", 00:22:00.210 "params": { 00:22:00.210 "name": "TLSTEST", 00:22:00.210 "trtype": "TCP", 00:22:00.210 "adrfam": "IPv4", 00:22:00.210 "traddr": "10.0.0.2", 00:22:00.210 "trsvcid": "4420", 00:22:00.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.210 "prchk_reftag": false, 00:22:00.210 "prchk_guard": false, 00:22:00.210 "ctrlr_loss_timeout_sec": 0, 00:22:00.210 "reconnect_delay_sec": 0, 00:22:00.210 "fast_io_fail_timeout_sec": 0, 00:22:00.210 "psk": "/tmp/tmp.fSldM0Xo6C", 00:22:00.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.210 "hdgst": false, 00:22:00.210 "ddgst": false 00:22:00.210 } 00:22:00.210 }, 00:22:00.210 { 00:22:00.210 "method": "bdev_nvme_set_hotplug", 00:22:00.210 "params": { 00:22:00.210 "period_us": 100000, 00:22:00.210 "enable": false 00:22:00.210 } 00:22:00.210 }, 00:22:00.210 { 00:22:00.210 "method": "bdev_wait_for_examine" 00:22:00.210 } 00:22:00.210 ] 00:22:00.210 }, 00:22:00.210 { 00:22:00.210 "subsystem": "nbd", 00:22:00.210 "config": [] 00:22:00.210 } 00:22:00.210 ] 00:22:00.210 }' 00:22:00.210 12:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1030279 00:22:00.210 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1030279 ']' 00:22:00.210 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1030279 00:22:00.210 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030279 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030279' 00:22:00.468 killing process with pid 1030279 
00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1030279 00:22:00.468 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.468 00:22:00.468 Latency(us) 00:22:00.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.468 =================================================================================================================== 00:22:00.468 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.468 [2024-07-22 12:17:08.170024] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1030279 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1030005 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1030005 ']' 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1030005 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.468 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030005 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030005' 00:22:00.727 killing process with pid 1030005 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1030005 00:22:00.727 [2024-07-22 12:17:08.413652] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1030005 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.727 12:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:00.727 "subsystems": [ 00:22:00.727 { 00:22:00.727 "subsystem": "keyring", 00:22:00.727 "config": [] 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "subsystem": "iobuf", 00:22:00.727 "config": [ 00:22:00.727 { 00:22:00.727 "method": "iobuf_set_options", 00:22:00.727 "params": { 00:22:00.727 "small_pool_count": 8192, 00:22:00.727 "large_pool_count": 1024, 00:22:00.727 "small_bufsize": 8192, 00:22:00.727 "large_bufsize": 135168 00:22:00.727 } 00:22:00.727 } 00:22:00.727 ] 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "subsystem": "sock", 00:22:00.727 "config": [ 00:22:00.727 { 00:22:00.727 "method": "sock_set_default_impl", 00:22:00.727 "params": { 00:22:00.727 "impl_name": "posix" 00:22:00.727 } 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "method": "sock_impl_set_options", 00:22:00.727 "params": { 00:22:00.727 "impl_name": "ssl", 00:22:00.727 "recv_buf_size": 4096, 00:22:00.727 "send_buf_size": 4096, 00:22:00.727 "enable_recv_pipe": true, 00:22:00.727 "enable_quickack": false, 00:22:00.727 "enable_placement_id": 0, 00:22:00.727 "enable_zerocopy_send_server": true, 00:22:00.727 "enable_zerocopy_send_client": false, 00:22:00.727 "zerocopy_threshold": 0, 00:22:00.727 "tls_version": 0, 00:22:00.727 
"enable_ktls": false 00:22:00.727 } 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "method": "sock_impl_set_options", 00:22:00.727 "params": { 00:22:00.727 "impl_name": "posix", 00:22:00.727 "recv_buf_size": 2097152, 00:22:00.727 "send_buf_size": 2097152, 00:22:00.727 "enable_recv_pipe": true, 00:22:00.727 "enable_quickack": false, 00:22:00.727 "enable_placement_id": 0, 00:22:00.727 "enable_zerocopy_send_server": true, 00:22:00.727 "enable_zerocopy_send_client": false, 00:22:00.727 "zerocopy_threshold": 0, 00:22:00.727 "tls_version": 0, 00:22:00.727 "enable_ktls": false 00:22:00.727 } 00:22:00.727 } 00:22:00.727 ] 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "subsystem": "vmd", 00:22:00.727 "config": [] 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "subsystem": "accel", 00:22:00.727 "config": [ 00:22:00.727 { 00:22:00.727 "method": "accel_set_options", 00:22:00.727 "params": { 00:22:00.727 "small_cache_size": 128, 00:22:00.727 "large_cache_size": 16, 00:22:00.727 "task_count": 2048, 00:22:00.727 "sequence_count": 2048, 00:22:00.727 "buf_count": 2048 00:22:00.727 } 00:22:00.727 } 00:22:00.727 ] 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "subsystem": "bdev", 00:22:00.727 "config": [ 00:22:00.727 { 00:22:00.727 "method": "bdev_set_options", 00:22:00.727 "params": { 00:22:00.727 "bdev_io_pool_size": 65535, 00:22:00.727 "bdev_io_cache_size": 256, 00:22:00.727 "bdev_auto_examine": true, 00:22:00.727 "iobuf_small_cache_size": 128, 00:22:00.727 "iobuf_large_cache_size": 16 00:22:00.727 } 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "method": "bdev_raid_set_options", 00:22:00.727 "params": { 00:22:00.727 "process_window_size_kb": 1024, 00:22:00.727 "process_max_bandwidth_mb_sec": 0 00:22:00.727 } 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "method": "bdev_iscsi_set_options", 00:22:00.727 "params": { 00:22:00.727 "timeout_sec": 30 00:22:00.727 } 00:22:00.727 }, 00:22:00.727 { 00:22:00.727 "method": "bdev_nvme_set_options", 00:22:00.727 "params": { 00:22:00.727 "action_on_timeout": "none", 00:22:00.727 "timeout_us": 0, 00:22:00.727 "timeout_admin_us": 0, 00:22:00.727 "keep_alive_timeout_ms": 10000, 00:22:00.727 "arbitration_burst": 0, 00:22:00.727 "low_priority_weight": 0, 00:22:00.727 "medium_priority_weight": 0, 00:22:00.727 "high_priority_weight": 0, 00:22:00.727 "nvme_adminq_poll_period_us": 10000, 00:22:00.727 "nvme_ioq_poll_period_us": 0, 00:22:00.727 "io_queue_requests": 0, 00:22:00.727 "delay_cmd_submit": true, 00:22:00.727 "transport_retry_count": 4, 00:22:00.727 "bdev_retry_count": 3, 00:22:00.727 "transport_ack_timeout": 0, 00:22:00.727 "ctrlr_loss_timeout_sec": 0, 00:22:00.727 "reconnect_delay_sec": 0, 00:22:00.727 "fast_io_fail_timeout_sec": 0, 00:22:00.727 "disable_auto_failback": false, 00:22:00.727 "generate_uuids": false, 00:22:00.727 "transport_tos": 0, 00:22:00.727 "nvme_error_stat": false, 00:22:00.727 "rdma_srq_size": 0, 00:22:00.728 "io_path_stat": false, 00:22:00.728 "allow_accel_sequence": false, 00:22:00.728 "rdma_max_cq_size": 0, 00:22:00.728 "rdma_cm_event_timeout_ms": 0, 00:22:00.728 "dhchap_digests": [ 00:22:00.728 "sha256", 00:22:00.728 "sha384", 00:22:00.728 "sha512" 00:22:00.728 ], 00:22:00.728 "dhchap_dhgroups": [ 00:22:00.728 "null", 00:22:00.728 "ffdhe2048", 00:22:00.728 "ffdhe3072", 00:22:00.728 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.728 "ffdhe4096", 00:22:00.728 "ffdhe6144", 00:22:00.728 "ffdhe8192" 00:22:00.728 ] 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "bdev_nvme_set_hotplug", 00:22:00.728 
"params": { 00:22:00.728 "period_us": 100000, 00:22:00.728 "enable": false 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "bdev_malloc_create", 00:22:00.728 "params": { 00:22:00.728 "name": "malloc0", 00:22:00.728 "num_blocks": 8192, 00:22:00.728 "block_size": 4096, 00:22:00.728 "physical_block_size": 4096, 00:22:00.728 "uuid": "d01b9d07-79c4-4821-b85f-5c8eabad338d", 00:22:00.728 "optimal_io_boundary": 0 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "bdev_wait_for_examine" 00:22:00.728 } 00:22:00.728 ] 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "subsystem": "nbd", 00:22:00.728 "config": [] 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "subsystem": "scheduler", 00:22:00.728 "config": [ 00:22:00.728 { 00:22:00.728 "method": "framework_set_scheduler", 00:22:00.728 "params": { 00:22:00.728 "name": "static" 00:22:00.728 } 00:22:00.728 } 00:22:00.728 ] 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "subsystem": "nvmf", 00:22:00.728 "config": [ 00:22:00.728 { 00:22:00.728 "method": "nvmf_set_config", 00:22:00.728 "params": { 00:22:00.728 "discovery_filter": "match_any", 00:22:00.728 "admin_cmd_passthru": { 00:22:00.728 "identify_ctrlr": false 00:22:00.728 } 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "nvmf_set_max_subsystems", 00:22:00.728 "params": { 00:22:00.728 "max_subsystems": 1024 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "nvmf_set_crdt", 00:22:00.728 "params": { 00:22:00.728 "crdt1": 0, 00:22:00.728 "crdt2": 0, 00:22:00.728 "crdt3": 0 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "nvmf_create_transport", 00:22:00.728 "params": { 00:22:00.728 "trtype": "TCP", 00:22:00.728 "max_queue_depth": 128, 00:22:00.728 "max_io_qpairs_per_ctrlr": 127, 00:22:00.728 "in_capsule_data_size": 4096, 00:22:00.728 "max_io_size": 131072, 00:22:00.728 "io_unit_size": 131072, 00:22:00.728 "max_aq_depth": 128, 00:22:00.728 "num_shared_buffers": 511, 00:22:00.728 "buf_cache_size": 4294967295, 00:22:00.728 "dif_insert_or_strip": false, 00:22:00.728 "zcopy": false, 00:22:00.728 "c2h_success": false, 00:22:00.728 "sock_priority": 0, 00:22:00.728 "abort_timeout_sec": 1, 00:22:00.728 "ack_timeout": 0, 00:22:00.728 "data_wr_pool_size": 0 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "nvmf_create_subsystem", 00:22:00.728 "params": { 00:22:00.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.728 "allow_any_host": false, 00:22:00.728 "serial_number": "SPDK00000000000001", 00:22:00.728 "model_number": "SPDK bdev Controller", 00:22:00.728 "max_namespaces": 10, 00:22:00.728 "min_cntlid": 1, 00:22:00.728 "max_cntlid": 65519, 00:22:00.728 "ana_reporting": false 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "nvmf_subsystem_add_host", 00:22:00.728 "params": { 00:22:00.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.728 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.728 "psk": "/tmp/tmp.fSldM0Xo6C" 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "nvmf_subsystem_add_ns", 00:22:00.728 "params": { 00:22:00.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.728 "namespace": { 00:22:00.728 "nsid": 1, 00:22:00.728 "bdev_name": "malloc0", 00:22:00.728 "nguid": "D01B9D0779C44821B85F5C8EABAD338D", 00:22:00.728 "uuid": "d01b9d07-79c4-4821-b85f-5c8eabad338d", 00:22:00.728 "no_auto_visible": false 00:22:00.728 } 00:22:00.728 } 00:22:00.728 }, 00:22:00.728 { 00:22:00.728 "method": "nvmf_subsystem_add_listener", 00:22:00.728 "params": { 00:22:00.728 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:00.728 "listen_address": { 00:22:00.728 "trtype": "TCP", 00:22:00.728 "adrfam": "IPv4", 00:22:00.728 "traddr": "10.0.0.2", 00:22:00.728 "trsvcid": "4420" 00:22:00.728 }, 00:22:00.728 "secure_channel": true 00:22:00.728 } 00:22:00.728 } 00:22:00.728 ] 00:22:00.728 } 00:22:00.728 ] 00:22:00.728 }' 00:22:00.728 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1030555 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1030555 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1030555 ']' 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.987 12:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.987 [2024-07-22 12:17:08.707319] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:00.987 [2024-07-22 12:17:08.707422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.987 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.987 [2024-07-22 12:17:08.744250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:00.987 [2024-07-22 12:17:08.775985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.987 [2024-07-22 12:17:08.864485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.987 [2024-07-22 12:17:08.864543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.987 [2024-07-22 12:17:08.864570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.987 [2024-07-22 12:17:08.864584] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.987 [2024-07-22 12:17:08.864596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.987 [2024-07-22 12:17:08.864693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.246 [2024-07-22 12:17:09.104180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.246 [2024-07-22 12:17:09.127356] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:01.246 [2024-07-22 12:17:09.143427] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.246 [2024-07-22 12:17:09.143681] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1030628 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1030628 /var/tmp/bdevperf.sock 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1030628 ']' 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.812 12:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:01.812 "subsystems": [ 00:22:01.812 { 00:22:01.812 "subsystem": "keyring", 00:22:01.812 "config": [] 00:22:01.812 }, 00:22:01.812 { 00:22:01.812 "subsystem": "iobuf", 00:22:01.812 "config": [ 00:22:01.813 { 00:22:01.813 "method": "iobuf_set_options", 00:22:01.813 "params": { 00:22:01.813 "small_pool_count": 8192, 00:22:01.813 "large_pool_count": 1024, 00:22:01.813 "small_bufsize": 8192, 00:22:01.813 "large_bufsize": 135168 00:22:01.813 } 00:22:01.813 } 00:22:01.813 ] 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "subsystem": "sock", 00:22:01.813 "config": [ 00:22:01.813 { 00:22:01.813 "method": "sock_set_default_impl", 00:22:01.813 "params": { 00:22:01.813 "impl_name": "posix" 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "sock_impl_set_options", 00:22:01.813 "params": { 00:22:01.813 "impl_name": "ssl", 00:22:01.813 "recv_buf_size": 4096, 00:22:01.813 "send_buf_size": 4096, 00:22:01.813 "enable_recv_pipe": true, 00:22:01.813 "enable_quickack": false, 00:22:01.813 "enable_placement_id": 0, 00:22:01.813 "enable_zerocopy_send_server": true, 00:22:01.813 "enable_zerocopy_send_client": false, 00:22:01.813 "zerocopy_threshold": 0, 00:22:01.813 "tls_version": 0, 00:22:01.813 "enable_ktls": false 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "sock_impl_set_options", 00:22:01.813 "params": { 00:22:01.813 "impl_name": "posix", 00:22:01.813 "recv_buf_size": 2097152, 00:22:01.813 "send_buf_size": 2097152, 00:22:01.813 "enable_recv_pipe": true, 00:22:01.813 "enable_quickack": false, 00:22:01.813 "enable_placement_id": 0, 00:22:01.813 "enable_zerocopy_send_server": true, 00:22:01.813 "enable_zerocopy_send_client": false, 00:22:01.813 "zerocopy_threshold": 0, 00:22:01.813 "tls_version": 0, 00:22:01.813 "enable_ktls": false 00:22:01.813 } 00:22:01.813 } 00:22:01.813 ] 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "subsystem": "vmd", 00:22:01.813 "config": [] 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "subsystem": "accel", 00:22:01.813 "config": [ 00:22:01.813 { 00:22:01.813 "method": "accel_set_options", 00:22:01.813 "params": { 00:22:01.813 "small_cache_size": 128, 00:22:01.813 "large_cache_size": 16, 00:22:01.813 "task_count": 2048, 00:22:01.813 "sequence_count": 2048, 00:22:01.813 "buf_count": 2048 00:22:01.813 } 00:22:01.813 } 00:22:01.813 ] 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "subsystem": "bdev", 00:22:01.813 "config": [ 00:22:01.813 { 00:22:01.813 "method": "bdev_set_options", 00:22:01.813 "params": { 00:22:01.813 "bdev_io_pool_size": 65535, 00:22:01.813 "bdev_io_cache_size": 256, 00:22:01.813 "bdev_auto_examine": true, 00:22:01.813 "iobuf_small_cache_size": 128, 00:22:01.813 "iobuf_large_cache_size": 16 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "bdev_raid_set_options", 00:22:01.813 "params": { 00:22:01.813 "process_window_size_kb": 1024, 00:22:01.813 "process_max_bandwidth_mb_sec": 0 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "bdev_iscsi_set_options", 00:22:01.813 "params": { 00:22:01.813 "timeout_sec": 30 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "bdev_nvme_set_options", 00:22:01.813 "params": { 00:22:01.813 "action_on_timeout": "none", 00:22:01.813 "timeout_us": 0, 00:22:01.813 "timeout_admin_us": 0, 00:22:01.813 "keep_alive_timeout_ms": 10000, 00:22:01.813 "arbitration_burst": 0, 00:22:01.813 
"low_priority_weight": 0, 00:22:01.813 "medium_priority_weight": 0, 00:22:01.813 "high_priority_weight": 0, 00:22:01.813 "nvme_adminq_poll_period_us": 10000, 00:22:01.813 "nvme_ioq_poll_period_us": 0, 00:22:01.813 "io_queue_requests": 512, 00:22:01.813 "delay_cmd_submit": true, 00:22:01.813 "transport_retry_count": 4, 00:22:01.813 "bdev_retry_count": 3, 00:22:01.813 "transport_ack_timeout": 0, 00:22:01.813 "ctrlr_loss_timeout_sec": 0, 00:22:01.813 "reconnect_delay_sec": 0, 00:22:01.813 "fast_io_fail_timeout_sec": 0, 00:22:01.813 "disable_auto_failback": false, 00:22:01.813 "generate_uuids": false, 00:22:01.813 "transport_tos": 0, 00:22:01.813 "nvme_error_stat": false, 00:22:01.813 "rdma_srq_size": 0, 00:22:01.813 "io_path_stat": false, 00:22:01.813 "allow_accel_sequence": false, 00:22:01.813 "rdma_max_cq_size": 0, 00:22:01.813 "rdma_cm_event_timeout_ms": 0, 00:22:01.813 "dhchap_digests": [ 00:22:01.813 "sha256", 00:22:01.813 "sha384", 00:22:01.813 "sha512" 00:22:01.813 ], 00:22:01.813 "dhchap_dhgroups": [ 00:22:01.813 "null", 00:22:01.813 "ffdhe2048", 00:22:01.813 "ffdhe3072", 00:22:01.813 "ffdhe4096", 00:22:01.813 "ffdhe6144", 00:22:01.813 "ffdhe8192" 00:22:01.813 ] 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "bdev_nvme_attach_controller", 00:22:01.813 "params": { 00:22:01.813 "name": "TLSTEST", 00:22:01.813 "trtype": "TCP", 00:22:01.813 "adrfam": "IPv4", 00:22:01.813 "traddr": "10.0.0.2", 00:22:01.813 "trsvcid": "4420", 00:22:01.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.813 "prchk_reftag": false, 00:22:01.813 "prchk_guard": false, 00:22:01.813 "ctrlr_loss_timeout_sec": 0, 00:22:01.813 "reconnect_delay_sec": 0, 00:22:01.813 "fast_io_fail_timeout_sec": 0, 00:22:01.813 "psk": "/tmp/tmp.fSldM0Xo6C", 00:22:01.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.813 "hdgst": false, 00:22:01.813 "ddgst": false 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "bdev_nvme_set_hotplug", 00:22:01.813 "params": { 00:22:01.813 "period_us": 100000, 00:22:01.813 "enable": false 00:22:01.813 } 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "method": "bdev_wait_for_examine" 00:22:01.813 } 00:22:01.813 ] 00:22:01.813 }, 00:22:01.813 { 00:22:01.813 "subsystem": "nbd", 00:22:01.813 "config": [] 00:22:01.813 } 00:22:01.813 ] 00:22:01.813 }' 00:22:01.813 12:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.813 [2024-07-22 12:17:09.711890] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:01.813 [2024-07-22 12:17:09.712023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030628 ] 00:22:02.073 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.073 [2024-07-22 12:17:09.747296] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:02.073 [2024-07-22 12:17:09.775507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.073 [2024-07-22 12:17:09.858812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.333 [2024-07-22 12:17:10.029538] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.333 [2024-07-22 12:17:10.029736] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.905 12:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.905 12:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:02.905 12:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:02.905 Running I/O for 10 seconds... 00:22:15.139 00:22:15.139 Latency(us) 00:22:15.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.139 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:15.139 Verification LBA range: start 0x0 length 0x2000 00:22:15.139 TLSTESTn1 : 10.04 1642.32 6.42 0.00 0.00 77804.58 11116.85 70681.79 00:22:15.139 =================================================================================================================== 00:22:15.139 Total : 1642.32 6.42 0.00 0.00 77804.58 11116.85 70681.79 00:22:15.139 0 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1030628 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1030628 ']' 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1030628 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030628 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030628' 00:22:15.139 killing process with pid 1030628 00:22:15.139 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1030628 00:22:15.139 Received shutdown signal, test time was about 10.000000 seconds 00:22:15.139 00:22:15.139 Latency(us) 00:22:15.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.139 =================================================================================================================== 00:22:15.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.140 [2024-07-22 12:17:20.896476] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:15.140 12:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1030628 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1030555 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1030555 ']' 00:22:15.140 12:17:21 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1030555 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030555 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030555' 00:22:15.140 killing process with pid 1030555 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1030555 00:22:15.140 [2024-07-22 12:17:21.142813] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1030555 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1032038 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1032038 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1032038 ']' 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.140 [2024-07-22 12:17:21.449422] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:15.140 [2024-07-22 12:17:21.449515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.140 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.140 [2024-07-22 12:17:21.485205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:15.140 [2024-07-22 12:17:21.516897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.140 [2024-07-22 12:17:21.603539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:15.140 [2024-07-22 12:17:21.603602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.140 [2024-07-22 12:17:21.603628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.140 [2024-07-22 12:17:21.603643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.140 [2024-07-22 12:17:21.603655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.140 [2024-07-22 12:17:21.603686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.fSldM0Xo6C 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fSldM0Xo6C 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.140 [2024-07-22 12:17:21.969028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.140 12:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:15.140 12:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:15.140 [2024-07-22 12:17:22.498497] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:15.140 [2024-07-22 12:17:22.498790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.140 12:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:15.140 malloc0 00:22:15.140 12:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:15.397 12:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fSldM0Xo6C 00:22:15.654 [2024-07-22 12:17:23.348516] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1032288 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1032288 /var/tmp/bdevperf.sock 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1032288 ']' 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.654 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.654 [2024-07-22 12:17:23.411373] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:15.654 [2024-07-22 12:17:23.411463] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032288 ] 00:22:15.654 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.654 [2024-07-22 12:17:23.443489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:15.654 [2024-07-22 12:17:23.475247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.654 [2024-07-22 12:17:23.566810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.911 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.911 12:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:15.911 12:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fSldM0Xo6C 00:22:16.167 12:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:16.428 [2024-07-22 12:17:24.126634] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.428 nvme0n1 00:22:16.428 12:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:16.428 Running I/O for 1 seconds... 
00:22:17.832 00:22:17.832 Latency(us) 00:22:17.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.832 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:17.832 Verification LBA range: start 0x0 length 0x2000 00:22:17.832 nvme0n1 : 1.03 3197.37 12.49 0.00 0.00 39400.40 10922.67 73788.68 00:22:17.832 =================================================================================================================== 00:22:17.832 Total : 3197.37 12.49 0.00 0.00 39400.40 10922.67 73788.68 00:22:17.832 0 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1032288 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1032288 ']' 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1032288 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032288 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032288' 00:22:17.832 killing process with pid 1032288 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1032288 00:22:17.832 Received shutdown signal, test time was about 1.000000 seconds 00:22:17.832 00:22:17.832 Latency(us) 00:22:17.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.832 =================================================================================================================== 00:22:17.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1032288 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1032038 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1032038 ']' 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1032038 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032038 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032038' 00:22:17.832 killing process with pid 1032038 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1032038 00:22:17.832 [2024-07-22 12:17:25.647475] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:17.832 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1032038 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.091 
12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1032619 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1032619 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1032619 ']' 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.091 12:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.091 [2024-07-22 12:17:25.938426] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:18.091 [2024-07-22 12:17:25.938517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.091 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.091 [2024-07-22 12:17:25.975144] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:18.091 [2024-07-22 12:17:26.001609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.350 [2024-07-22 12:17:26.086713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.350 [2024-07-22 12:17:26.086772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.350 [2024-07-22 12:17:26.086801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.350 [2024-07-22 12:17:26.086814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.350 [2024-07-22 12:17:26.086826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:18.350 [2024-07-22 12:17:26.086863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.350 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.350 [2024-07-22 12:17:26.237291] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.350 malloc0 00:22:18.350 [2024-07-22 12:17:26.270381] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.350 [2024-07-22 12:17:26.270694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1032645 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1032645 /var/tmp/bdevperf.sock 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1032645 ']' 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.608 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.608 [2024-07-22 12:17:26.342114] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:18.608 [2024-07-22 12:17:26.342175] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032645 ] 00:22:18.608 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.608 [2024-07-22 12:17:26.374124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:18.608 [2024-07-22 12:17:26.403938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.608 [2024-07-22 12:17:26.495033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.866 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.866 12:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:18.866 12:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fSldM0Xo6C 00:22:19.123 12:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:19.381 [2024-07-22 12:17:27.104265] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.381 nvme0n1 00:22:19.381 12:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:19.381 Running I/O for 1 seconds... 00:22:20.757 00:22:20.757 Latency(us) 00:22:20.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.757 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:20.757 Verification LBA range: start 0x0 length 0x2000 00:22:20.757 nvme0n1 : 1.04 3189.48 12.46 0.00 0.00 39468.59 10874.12 58642.58 00:22:20.757 =================================================================================================================== 00:22:20.757 Total : 3189.48 12.46 0.00 0.00 39468.59 10874.12 58642.58 00:22:20.757 0 00:22:20.757 12:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:20.757 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.757 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.757 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.757 12:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:20.757 "subsystems": [ 00:22:20.757 { 00:22:20.757 "subsystem": "keyring", 00:22:20.757 "config": [ 00:22:20.757 { 00:22:20.757 "method": "keyring_file_add_key", 00:22:20.757 "params": { 00:22:20.757 "name": "key0", 00:22:20.757 "path": "/tmp/tmp.fSldM0Xo6C" 00:22:20.757 } 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "iobuf", 00:22:20.757 "config": [ 00:22:20.757 { 00:22:20.757 "method": "iobuf_set_options", 00:22:20.757 "params": { 00:22:20.757 "small_pool_count": 8192, 00:22:20.757 "large_pool_count": 1024, 00:22:20.757 "small_bufsize": 8192, 00:22:20.757 "large_bufsize": 135168 00:22:20.757 } 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "sock", 00:22:20.757 "config": [ 00:22:20.757 { 00:22:20.757 "method": "sock_set_default_impl", 00:22:20.757 "params": { 00:22:20.757 "impl_name": "posix" 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "sock_impl_set_options", 00:22:20.757 "params": { 00:22:20.757 "impl_name": "ssl", 00:22:20.757 "recv_buf_size": 4096, 00:22:20.757 "send_buf_size": 4096, 00:22:20.757 "enable_recv_pipe": true, 00:22:20.757 "enable_quickack": false, 00:22:20.757 "enable_placement_id": 0, 00:22:20.757 
"enable_zerocopy_send_server": true, 00:22:20.757 "enable_zerocopy_send_client": false, 00:22:20.757 "zerocopy_threshold": 0, 00:22:20.757 "tls_version": 0, 00:22:20.757 "enable_ktls": false 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "sock_impl_set_options", 00:22:20.757 "params": { 00:22:20.757 "impl_name": "posix", 00:22:20.757 "recv_buf_size": 2097152, 00:22:20.757 "send_buf_size": 2097152, 00:22:20.757 "enable_recv_pipe": true, 00:22:20.757 "enable_quickack": false, 00:22:20.757 "enable_placement_id": 0, 00:22:20.757 "enable_zerocopy_send_server": true, 00:22:20.757 "enable_zerocopy_send_client": false, 00:22:20.757 "zerocopy_threshold": 0, 00:22:20.757 "tls_version": 0, 00:22:20.757 "enable_ktls": false 00:22:20.757 } 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "vmd", 00:22:20.757 "config": [] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "accel", 00:22:20.757 "config": [ 00:22:20.757 { 00:22:20.757 "method": "accel_set_options", 00:22:20.757 "params": { 00:22:20.757 "small_cache_size": 128, 00:22:20.757 "large_cache_size": 16, 00:22:20.757 "task_count": 2048, 00:22:20.757 "sequence_count": 2048, 00:22:20.757 "buf_count": 2048 00:22:20.757 } 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "bdev", 00:22:20.757 "config": [ 00:22:20.757 { 00:22:20.757 "method": "bdev_set_options", 00:22:20.757 "params": { 00:22:20.757 "bdev_io_pool_size": 65535, 00:22:20.757 "bdev_io_cache_size": 256, 00:22:20.757 "bdev_auto_examine": true, 00:22:20.757 "iobuf_small_cache_size": 128, 00:22:20.757 "iobuf_large_cache_size": 16 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "bdev_raid_set_options", 00:22:20.757 "params": { 00:22:20.757 "process_window_size_kb": 1024, 00:22:20.757 "process_max_bandwidth_mb_sec": 0 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "bdev_iscsi_set_options", 00:22:20.757 "params": { 00:22:20.757 "timeout_sec": 30 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "bdev_nvme_set_options", 00:22:20.757 "params": { 00:22:20.757 "action_on_timeout": "none", 00:22:20.757 "timeout_us": 0, 00:22:20.757 "timeout_admin_us": 0, 00:22:20.757 "keep_alive_timeout_ms": 10000, 00:22:20.757 "arbitration_burst": 0, 00:22:20.757 "low_priority_weight": 0, 00:22:20.757 "medium_priority_weight": 0, 00:22:20.757 "high_priority_weight": 0, 00:22:20.757 "nvme_adminq_poll_period_us": 10000, 00:22:20.757 "nvme_ioq_poll_period_us": 0, 00:22:20.757 "io_queue_requests": 0, 00:22:20.757 "delay_cmd_submit": true, 00:22:20.757 "transport_retry_count": 4, 00:22:20.757 "bdev_retry_count": 3, 00:22:20.757 "transport_ack_timeout": 0, 00:22:20.757 "ctrlr_loss_timeout_sec": 0, 00:22:20.757 "reconnect_delay_sec": 0, 00:22:20.757 "fast_io_fail_timeout_sec": 0, 00:22:20.757 "disable_auto_failback": false, 00:22:20.757 "generate_uuids": false, 00:22:20.757 "transport_tos": 0, 00:22:20.757 "nvme_error_stat": false, 00:22:20.757 "rdma_srq_size": 0, 00:22:20.757 "io_path_stat": false, 00:22:20.757 "allow_accel_sequence": false, 00:22:20.757 "rdma_max_cq_size": 0, 00:22:20.757 "rdma_cm_event_timeout_ms": 0, 00:22:20.757 "dhchap_digests": [ 00:22:20.757 "sha256", 00:22:20.757 "sha384", 00:22:20.757 "sha512" 00:22:20.757 ], 00:22:20.757 "dhchap_dhgroups": [ 00:22:20.757 "null", 00:22:20.757 "ffdhe2048", 00:22:20.757 "ffdhe3072", 00:22:20.757 "ffdhe4096", 00:22:20.757 "ffdhe6144", 00:22:20.757 "ffdhe8192" 00:22:20.757 ] 00:22:20.757 } 00:22:20.757 }, 
00:22:20.757 { 00:22:20.757 "method": "bdev_nvme_set_hotplug", 00:22:20.757 "params": { 00:22:20.757 "period_us": 100000, 00:22:20.757 "enable": false 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "bdev_malloc_create", 00:22:20.757 "params": { 00:22:20.757 "name": "malloc0", 00:22:20.757 "num_blocks": 8192, 00:22:20.757 "block_size": 4096, 00:22:20.757 "physical_block_size": 4096, 00:22:20.757 "uuid": "29bee8b3-8a2c-4981-8367-1acb4da7367f", 00:22:20.757 "optimal_io_boundary": 0 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "bdev_wait_for_examine" 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "nbd", 00:22:20.757 "config": [] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "scheduler", 00:22:20.757 "config": [ 00:22:20.757 { 00:22:20.757 "method": "framework_set_scheduler", 00:22:20.757 "params": { 00:22:20.757 "name": "static" 00:22:20.757 } 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "subsystem": "nvmf", 00:22:20.757 "config": [ 00:22:20.757 { 00:22:20.757 "method": "nvmf_set_config", 00:22:20.757 "params": { 00:22:20.757 "discovery_filter": "match_any", 00:22:20.757 "admin_cmd_passthru": { 00:22:20.757 "identify_ctrlr": false 00:22:20.757 } 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "nvmf_set_max_subsystems", 00:22:20.757 "params": { 00:22:20.757 "max_subsystems": 1024 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "nvmf_set_crdt", 00:22:20.757 "params": { 00:22:20.757 "crdt1": 0, 00:22:20.757 "crdt2": 0, 00:22:20.757 "crdt3": 0 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "nvmf_create_transport", 00:22:20.757 "params": { 00:22:20.757 "trtype": "TCP", 00:22:20.757 "max_queue_depth": 128, 00:22:20.757 "max_io_qpairs_per_ctrlr": 127, 00:22:20.757 "in_capsule_data_size": 4096, 00:22:20.757 "max_io_size": 131072, 00:22:20.757 "io_unit_size": 131072, 00:22:20.757 "max_aq_depth": 128, 00:22:20.757 "num_shared_buffers": 511, 00:22:20.757 "buf_cache_size": 4294967295, 00:22:20.757 "dif_insert_or_strip": false, 00:22:20.757 "zcopy": false, 00:22:20.757 "c2h_success": false, 00:22:20.757 "sock_priority": 0, 00:22:20.757 "abort_timeout_sec": 1, 00:22:20.757 "ack_timeout": 0, 00:22:20.757 "data_wr_pool_size": 0 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "nvmf_create_subsystem", 00:22:20.757 "params": { 00:22:20.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.757 "allow_any_host": false, 00:22:20.757 "serial_number": "00000000000000000000", 00:22:20.757 "model_number": "SPDK bdev Controller", 00:22:20.757 "max_namespaces": 32, 00:22:20.757 "min_cntlid": 1, 00:22:20.757 "max_cntlid": 65519, 00:22:20.757 "ana_reporting": false 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "nvmf_subsystem_add_host", 00:22:20.757 "params": { 00:22:20.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.757 "host": "nqn.2016-06.io.spdk:host1", 00:22:20.757 "psk": "key0" 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": "nvmf_subsystem_add_ns", 00:22:20.757 "params": { 00:22:20.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.757 "namespace": { 00:22:20.757 "nsid": 1, 00:22:20.757 "bdev_name": "malloc0", 00:22:20.757 "nguid": "29BEE8B38A2C498183671ACB4DA7367F", 00:22:20.757 "uuid": "29bee8b3-8a2c-4981-8367-1acb4da7367f", 00:22:20.757 "no_auto_visible": false 00:22:20.757 } 00:22:20.757 } 00:22:20.757 }, 00:22:20.757 { 00:22:20.757 "method": 
"nvmf_subsystem_add_listener", 00:22:20.757 "params": { 00:22:20.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.757 "listen_address": { 00:22:20.757 "trtype": "TCP", 00:22:20.757 "adrfam": "IPv4", 00:22:20.757 "traddr": "10.0.0.2", 00:22:20.757 "trsvcid": "4420" 00:22:20.757 }, 00:22:20.757 "secure_channel": false, 00:22:20.757 "sock_impl": "ssl" 00:22:20.757 } 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 } 00:22:20.757 ] 00:22:20.757 }' 00:22:20.757 12:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:21.016 12:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:21.016 "subsystems": [ 00:22:21.016 { 00:22:21.016 "subsystem": "keyring", 00:22:21.016 "config": [ 00:22:21.016 { 00:22:21.016 "method": "keyring_file_add_key", 00:22:21.016 "params": { 00:22:21.016 "name": "key0", 00:22:21.016 "path": "/tmp/tmp.fSldM0Xo6C" 00:22:21.016 } 00:22:21.016 } 00:22:21.016 ] 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "subsystem": "iobuf", 00:22:21.016 "config": [ 00:22:21.016 { 00:22:21.016 "method": "iobuf_set_options", 00:22:21.016 "params": { 00:22:21.016 "small_pool_count": 8192, 00:22:21.016 "large_pool_count": 1024, 00:22:21.016 "small_bufsize": 8192, 00:22:21.016 "large_bufsize": 135168 00:22:21.016 } 00:22:21.016 } 00:22:21.016 ] 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "subsystem": "sock", 00:22:21.016 "config": [ 00:22:21.016 { 00:22:21.016 "method": "sock_set_default_impl", 00:22:21.016 "params": { 00:22:21.016 "impl_name": "posix" 00:22:21.016 } 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "method": "sock_impl_set_options", 00:22:21.016 "params": { 00:22:21.016 "impl_name": "ssl", 00:22:21.016 "recv_buf_size": 4096, 00:22:21.016 "send_buf_size": 4096, 00:22:21.016 "enable_recv_pipe": true, 00:22:21.016 "enable_quickack": false, 00:22:21.016 "enable_placement_id": 0, 00:22:21.016 "enable_zerocopy_send_server": true, 00:22:21.016 "enable_zerocopy_send_client": false, 00:22:21.016 "zerocopy_threshold": 0, 00:22:21.016 "tls_version": 0, 00:22:21.016 "enable_ktls": false 00:22:21.016 } 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "method": "sock_impl_set_options", 00:22:21.016 "params": { 00:22:21.016 "impl_name": "posix", 00:22:21.016 "recv_buf_size": 2097152, 00:22:21.016 "send_buf_size": 2097152, 00:22:21.016 "enable_recv_pipe": true, 00:22:21.016 "enable_quickack": false, 00:22:21.016 "enable_placement_id": 0, 00:22:21.016 "enable_zerocopy_send_server": true, 00:22:21.016 "enable_zerocopy_send_client": false, 00:22:21.016 "zerocopy_threshold": 0, 00:22:21.016 "tls_version": 0, 00:22:21.016 "enable_ktls": false 00:22:21.016 } 00:22:21.016 } 00:22:21.016 ] 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "subsystem": "vmd", 00:22:21.016 "config": [] 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "subsystem": "accel", 00:22:21.016 "config": [ 00:22:21.016 { 00:22:21.016 "method": "accel_set_options", 00:22:21.016 "params": { 00:22:21.016 "small_cache_size": 128, 00:22:21.016 "large_cache_size": 16, 00:22:21.016 "task_count": 2048, 00:22:21.016 "sequence_count": 2048, 00:22:21.016 "buf_count": 2048 00:22:21.016 } 00:22:21.016 } 00:22:21.016 ] 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "subsystem": "bdev", 00:22:21.016 "config": [ 00:22:21.016 { 00:22:21.016 "method": "bdev_set_options", 00:22:21.016 "params": { 00:22:21.016 "bdev_io_pool_size": 65535, 00:22:21.016 "bdev_io_cache_size": 256, 00:22:21.016 "bdev_auto_examine": true, 00:22:21.016 "iobuf_small_cache_size": 128, 
00:22:21.016 "iobuf_large_cache_size": 16 00:22:21.016 } 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "method": "bdev_raid_set_options", 00:22:21.016 "params": { 00:22:21.016 "process_window_size_kb": 1024, 00:22:21.016 "process_max_bandwidth_mb_sec": 0 00:22:21.016 } 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "method": "bdev_iscsi_set_options", 00:22:21.016 "params": { 00:22:21.016 "timeout_sec": 30 00:22:21.016 } 00:22:21.016 }, 00:22:21.016 { 00:22:21.016 "method": "bdev_nvme_set_options", 00:22:21.016 "params": { 00:22:21.016 "action_on_timeout": "none", 00:22:21.016 "timeout_us": 0, 00:22:21.016 "timeout_admin_us": 0, 00:22:21.016 "keep_alive_timeout_ms": 10000, 00:22:21.016 "arbitration_burst": 0, 00:22:21.016 "low_priority_weight": 0, 00:22:21.016 "medium_priority_weight": 0, 00:22:21.016 "high_priority_weight": 0, 00:22:21.016 "nvme_adminq_poll_period_us": 10000, 00:22:21.016 "nvme_ioq_poll_period_us": 0, 00:22:21.016 "io_queue_requests": 512, 00:22:21.016 "delay_cmd_submit": true, 00:22:21.016 "transport_retry_count": 4, 00:22:21.016 "bdev_retry_count": 3, 00:22:21.016 "transport_ack_timeout": 0, 00:22:21.017 "ctrlr_loss_timeout_sec": 0, 00:22:21.017 "reconnect_delay_sec": 0, 00:22:21.017 "fast_io_fail_timeout_sec": 0, 00:22:21.017 "disable_auto_failback": false, 00:22:21.017 "generate_uuids": false, 00:22:21.017 "transport_tos": 0, 00:22:21.017 "nvme_error_stat": false, 00:22:21.017 "rdma_srq_size": 0, 00:22:21.017 "io_path_stat": false, 00:22:21.017 "allow_accel_sequence": false, 00:22:21.017 "rdma_max_cq_size": 0, 00:22:21.017 "rdma_cm_event_timeout_ms": 0, 00:22:21.017 "dhchap_digests": [ 00:22:21.017 "sha256", 00:22:21.017 "sha384", 00:22:21.017 "sha512" 00:22:21.017 ], 00:22:21.017 "dhchap_dhgroups": [ 00:22:21.017 "null", 00:22:21.017 "ffdhe2048", 00:22:21.017 "ffdhe3072", 00:22:21.017 "ffdhe4096", 00:22:21.017 "ffdhe6144", 00:22:21.017 "ffdhe8192" 00:22:21.017 ] 00:22:21.017 } 00:22:21.017 }, 00:22:21.017 { 00:22:21.017 "method": "bdev_nvme_attach_controller", 00:22:21.017 "params": { 00:22:21.017 "name": "nvme0", 00:22:21.017 "trtype": "TCP", 00:22:21.017 "adrfam": "IPv4", 00:22:21.017 "traddr": "10.0.0.2", 00:22:21.017 "trsvcid": "4420", 00:22:21.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.017 "prchk_reftag": false, 00:22:21.017 "prchk_guard": false, 00:22:21.017 "ctrlr_loss_timeout_sec": 0, 00:22:21.017 "reconnect_delay_sec": 0, 00:22:21.017 "fast_io_fail_timeout_sec": 0, 00:22:21.017 "psk": "key0", 00:22:21.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.017 "hdgst": false, 00:22:21.017 "ddgst": false 00:22:21.017 } 00:22:21.017 }, 00:22:21.017 { 00:22:21.017 "method": "bdev_nvme_set_hotplug", 00:22:21.017 "params": { 00:22:21.017 "period_us": 100000, 00:22:21.017 "enable": false 00:22:21.017 } 00:22:21.017 }, 00:22:21.017 { 00:22:21.017 "method": "bdev_enable_histogram", 00:22:21.017 "params": { 00:22:21.017 "name": "nvme0n1", 00:22:21.017 "enable": true 00:22:21.017 } 00:22:21.017 }, 00:22:21.017 { 00:22:21.017 "method": "bdev_wait_for_examine" 00:22:21.017 } 00:22:21.017 ] 00:22:21.017 }, 00:22:21.017 { 00:22:21.017 "subsystem": "nbd", 00:22:21.017 "config": [] 00:22:21.017 } 00:22:21.017 ] 00:22:21.017 }' 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1032645 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1032645 ']' 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1032645 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- 
# uname 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032645 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032645' 00:22:21.017 killing process with pid 1032645 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1032645 00:22:21.017 Received shutdown signal, test time was about 1.000000 seconds 00:22:21.017 00:22:21.017 Latency(us) 00:22:21.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.017 =================================================================================================================== 00:22:21.017 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.017 12:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1032645 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1032619 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1032619 ']' 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1032619 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032619 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032619' 00:22:21.275 killing process with pid 1032619 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1032619 00:22:21.275 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1032619 00:22:21.533 12:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:21.534 12:17:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.534 12:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:21.534 "subsystems": [ 00:22:21.534 { 00:22:21.534 "subsystem": "keyring", 00:22:21.534 "config": [ 00:22:21.534 { 00:22:21.534 "method": "keyring_file_add_key", 00:22:21.534 "params": { 00:22:21.534 "name": "key0", 00:22:21.534 "path": "/tmp/tmp.fSldM0Xo6C" 00:22:21.534 } 00:22:21.534 } 00:22:21.534 ] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "iobuf", 00:22:21.534 "config": [ 00:22:21.534 { 00:22:21.534 "method": "iobuf_set_options", 00:22:21.534 "params": { 00:22:21.534 "small_pool_count": 8192, 00:22:21.534 "large_pool_count": 1024, 00:22:21.534 "small_bufsize": 8192, 00:22:21.534 "large_bufsize": 135168 00:22:21.534 } 00:22:21.534 } 00:22:21.534 ] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "sock", 00:22:21.534 "config": [ 00:22:21.534 { 00:22:21.534 "method": "sock_set_default_impl", 00:22:21.534 "params": { 00:22:21.534 "impl_name": "posix" 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "sock_impl_set_options", 00:22:21.534 "params": { 
00:22:21.534 "impl_name": "ssl", 00:22:21.534 "recv_buf_size": 4096, 00:22:21.534 "send_buf_size": 4096, 00:22:21.534 "enable_recv_pipe": true, 00:22:21.534 "enable_quickack": false, 00:22:21.534 "enable_placement_id": 0, 00:22:21.534 "enable_zerocopy_send_server": true, 00:22:21.534 "enable_zerocopy_send_client": false, 00:22:21.534 "zerocopy_threshold": 0, 00:22:21.534 "tls_version": 0, 00:22:21.534 "enable_ktls": false 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "sock_impl_set_options", 00:22:21.534 "params": { 00:22:21.534 "impl_name": "posix", 00:22:21.534 "recv_buf_size": 2097152, 00:22:21.534 "send_buf_size": 2097152, 00:22:21.534 "enable_recv_pipe": true, 00:22:21.534 "enable_quickack": false, 00:22:21.534 "enable_placement_id": 0, 00:22:21.534 "enable_zerocopy_send_server": true, 00:22:21.534 "enable_zerocopy_send_client": false, 00:22:21.534 "zerocopy_threshold": 0, 00:22:21.534 "tls_version": 0, 00:22:21.534 "enable_ktls": false 00:22:21.534 } 00:22:21.534 } 00:22:21.534 ] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "vmd", 00:22:21.534 "config": [] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "accel", 00:22:21.534 "config": [ 00:22:21.534 { 00:22:21.534 "method": "accel_set_options", 00:22:21.534 "params": { 00:22:21.534 "small_cache_size": 128, 00:22:21.534 "large_cache_size": 16, 00:22:21.534 "task_count": 2048, 00:22:21.534 "sequence_count": 2048, 00:22:21.534 "buf_count": 2048 00:22:21.534 } 00:22:21.534 } 00:22:21.534 ] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "bdev", 00:22:21.534 "config": [ 00:22:21.534 { 00:22:21.534 "method": "bdev_set_options", 00:22:21.534 "params": { 00:22:21.534 "bdev_io_pool_size": 65535, 00:22:21.534 "bdev_io_cache_size": 256, 00:22:21.534 "bdev_auto_examine": true, 00:22:21.534 "iobuf_small_cache_size": 128, 00:22:21.534 "iobuf_large_cache_size": 16 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "bdev_raid_set_options", 00:22:21.534 "params": { 00:22:21.534 "process_window_size_kb": 1024, 00:22:21.534 "process_max_bandwidth_mb_sec": 0 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "bdev_iscsi_set_options", 00:22:21.534 "params": { 00:22:21.534 "timeout_sec": 30 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "bdev_nvme_set_options", 00:22:21.534 "params": { 00:22:21.534 "action_on_timeout": "none", 00:22:21.534 "timeout_us": 0, 00:22:21.534 "timeout_admin_us": 0, 00:22:21.534 "keep_alive_timeout_ms": 10000, 00:22:21.534 "arbitration_burst": 0, 00:22:21.534 "low_priority_weight": 0, 00:22:21.534 "medium_priority_weight": 0, 00:22:21.534 "high_priority_weight": 0, 00:22:21.534 "nvme_adminq_poll_period_us": 10000, 00:22:21.534 "nvme_ioq_poll_period_us": 0, 00:22:21.534 "io_queue_requests": 0, 00:22:21.534 "delay_cmd_submit": true, 00:22:21.534 "transport_retry_count": 4, 00:22:21.534 "bdev_retry_count": 3, 00:22:21.534 "transport_ack_timeout": 0, 00:22:21.534 "ctrlr_loss_timeout_sec": 0, 00:22:21.534 "reconnect_delay_sec": 0, 00:22:21.534 "fast_io_fail_timeout_sec": 0, 00:22:21.534 "disable_auto_failback": false, 00:22:21.534 "generate_uuids": false, 00:22:21.534 "transport_tos": 0, 00:22:21.534 "nvme_error_stat": false, 00:22:21.534 "rdma_srq_size": 0, 00:22:21.534 "io_path_stat": false, 00:22:21.534 "allow_accel_sequence": false, 00:22:21.534 "rdma_max_cq_size": 0, 00:22:21.534 "rdma_cm_event_timeout_ms": 0, 00:22:21.534 "dhchap_digests": [ 00:22:21.534 "sha256", 00:22:21.534 "sha384", 00:22:21.534 "sha512" 
00:22:21.534 ], 00:22:21.534 "dhchap_dhgroups": [ 00:22:21.534 "null", 00:22:21.534 "ffdhe2048", 00:22:21.534 "ffdhe3072", 00:22:21.534 "ffdhe4096", 00:22:21.534 "ffdhe6144", 00:22:21.534 "ffdhe8192" 00:22:21.534 ] 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "bdev_nvme_set_hotplug", 00:22:21.534 "params": { 00:22:21.534 "period_us": 100000, 00:22:21.534 "enable": false 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "bdev_malloc_create", 00:22:21.534 "params": { 00:22:21.534 "name": "malloc0", 00:22:21.534 "num_blocks": 8192, 00:22:21.534 "block_size": 4096, 00:22:21.534 "physical_block_size": 4096, 00:22:21.534 "uuid": "29bee8b3-8a2c-4981-8367-1acb4da7367f", 00:22:21.534 "optimal_io_boundary": 0 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "bdev_wait_for_examine" 00:22:21.534 } 00:22:21.534 ] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "nbd", 00:22:21.534 "config": [] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "scheduler", 00:22:21.534 "config": [ 00:22:21.534 { 00:22:21.534 "method": "framework_set_scheduler", 00:22:21.534 "params": { 00:22:21.534 "name": "static" 00:22:21.534 } 00:22:21.534 } 00:22:21.534 ] 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "subsystem": "nvmf", 00:22:21.534 "config": [ 00:22:21.534 { 00:22:21.534 "method": "nvmf_set_config", 00:22:21.534 "params": { 00:22:21.534 "discovery_filter": "match_any", 00:22:21.534 "admin_cmd_passthru": { 00:22:21.534 "identify_ctrlr": false 00:22:21.534 } 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "nvmf_set_max_subsystems", 00:22:21.534 "params": { 00:22:21.534 "max_subsystems": 1024 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "nvmf_set_crdt", 00:22:21.534 "params": { 00:22:21.534 "crdt1": 0, 00:22:21.534 "crdt2": 0, 00:22:21.534 "crdt3": 0 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "nvmf_create_transport", 00:22:21.534 "params": { 00:22:21.534 "trtype": "TCP", 00:22:21.534 "max_queue_depth": 128, 00:22:21.534 "max_io_qpairs_per_ctrlr": 127, 00:22:21.534 "in_capsule_data_size": 4096, 00:22:21.534 "max_io_size": 131072, 00:22:21.534 "io_unit_size": 131072, 00:22:21.534 "max_aq_depth": 128, 00:22:21.534 "num_shared_buffers": 511, 00:22:21.534 "buf_cache_size": 4294967295, 00:22:21.534 "dif_insert_or_strip": false, 00:22:21.534 "zcopy": false, 00:22:21.534 "c2h_success": false, 00:22:21.534 "sock_priority": 0, 00:22:21.534 "abort_timeout_sec": 1, 00:22:21.534 "ack_timeout": 0, 00:22:21.534 "data_wr_pool_size": 0 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "nvmf_create_subsystem", 00:22:21.534 "params": { 00:22:21.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.534 "allow_any_host": false, 00:22:21.534 "serial_number": "00000000000000000000", 00:22:21.534 "model_number": "SPDK bdev Controller", 00:22:21.534 "max_namespaces": 32, 00:22:21.534 "min_cntlid": 1, 00:22:21.534 "max_cntlid": 65519, 00:22:21.534 "ana_reporting": false 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "nvmf_subsystem_add_host", 00:22:21.534 "params": { 00:22:21.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.534 "host": "nqn.2016-06.io.spdk:host1", 00:22:21.534 "psk": "key0" 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "nvmf_subsystem_add_ns", 00:22:21.534 "params": { 00:22:21.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.534 "namespace": { 00:22:21.534 "nsid": 1, 00:22:21.534 "bdev_name": "malloc0", 00:22:21.534 
"nguid": "29BEE8B38A2C498183671ACB4DA7367F", 00:22:21.534 "uuid": "29bee8b3-8a2c-4981-8367-1acb4da7367f", 00:22:21.534 "no_auto_visible": false 00:22:21.534 } 00:22:21.534 } 00:22:21.534 }, 00:22:21.534 { 00:22:21.534 "method": "nvmf_subsystem_add_listener", 00:22:21.534 "params": { 00:22:21.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.534 "listen_address": { 00:22:21.534 "trtype": "TCP", 00:22:21.534 "adrfam": "IPv4", 00:22:21.534 "traddr": "10.0.0.2", 00:22:21.534 "trsvcid": "4420" 00:22:21.534 }, 00:22:21.534 "secure_channel": false, 00:22:21.534 "sock_impl": "ssl" 00:22:21.534 } 00:22:21.534 } 00:22:21.534 ] 00:22:21.534 } 00:22:21.535 ] 00:22:21.535 }' 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1033054 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1033054 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1033054 ']' 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.535 12:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.535 [2024-07-22 12:17:29.353247] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:21.535 [2024-07-22 12:17:29.353340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.535 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.535 [2024-07-22 12:17:29.388641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:21.535 [2024-07-22 12:17:29.419644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.794 [2024-07-22 12:17:29.510560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.794 [2024-07-22 12:17:29.510639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.794 [2024-07-22 12:17:29.510667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.794 [2024-07-22 12:17:29.510694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.794 [2024-07-22 12:17:29.510705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.795 [2024-07-22 12:17:29.510784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.054 [2024-07-22 12:17:29.755646] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.054 [2024-07-22 12:17:29.796424] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:22.054 [2024-07-22 12:17:29.796687] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1033205 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1033205 /var/tmp/bdevperf.sock 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1033205 ']' 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.621 12:17:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:22.621 "subsystems": [ 00:22:22.621 { 00:22:22.621 "subsystem": "keyring", 00:22:22.621 "config": [ 00:22:22.621 { 00:22:22.621 "method": "keyring_file_add_key", 00:22:22.621 "params": { 00:22:22.621 "name": "key0", 00:22:22.621 "path": "/tmp/tmp.fSldM0Xo6C" 00:22:22.621 } 00:22:22.621 } 00:22:22.621 ] 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "subsystem": "iobuf", 00:22:22.621 "config": [ 00:22:22.621 { 00:22:22.621 "method": "iobuf_set_options", 00:22:22.621 "params": { 00:22:22.621 "small_pool_count": 8192, 00:22:22.621 "large_pool_count": 1024, 00:22:22.621 "small_bufsize": 8192, 00:22:22.621 "large_bufsize": 135168 00:22:22.621 } 00:22:22.621 } 00:22:22.621 ] 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "subsystem": "sock", 00:22:22.621 "config": [ 00:22:22.621 { 00:22:22.621 "method": "sock_set_default_impl", 00:22:22.621 "params": { 00:22:22.621 "impl_name": "posix" 00:22:22.621 } 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "method": "sock_impl_set_options", 00:22:22.621 "params": { 00:22:22.621 "impl_name": "ssl", 00:22:22.621 "recv_buf_size": 4096, 00:22:22.621 "send_buf_size": 4096, 00:22:22.621 "enable_recv_pipe": true, 00:22:22.621 "enable_quickack": false, 00:22:22.621 "enable_placement_id": 0, 00:22:22.621 "enable_zerocopy_send_server": true, 00:22:22.621 "enable_zerocopy_send_client": false, 00:22:22.621 "zerocopy_threshold": 0, 00:22:22.621 "tls_version": 0, 00:22:22.621 "enable_ktls": false 00:22:22.621 } 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "method": "sock_impl_set_options", 00:22:22.621 "params": { 00:22:22.621 "impl_name": "posix", 00:22:22.621 "recv_buf_size": 2097152, 00:22:22.621 "send_buf_size": 2097152, 00:22:22.621 
"enable_recv_pipe": true, 00:22:22.621 "enable_quickack": false, 00:22:22.621 "enable_placement_id": 0, 00:22:22.621 "enable_zerocopy_send_server": true, 00:22:22.621 "enable_zerocopy_send_client": false, 00:22:22.621 "zerocopy_threshold": 0, 00:22:22.621 "tls_version": 0, 00:22:22.621 "enable_ktls": false 00:22:22.621 } 00:22:22.621 } 00:22:22.621 ] 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "subsystem": "vmd", 00:22:22.621 "config": [] 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "subsystem": "accel", 00:22:22.621 "config": [ 00:22:22.621 { 00:22:22.621 "method": "accel_set_options", 00:22:22.621 "params": { 00:22:22.621 "small_cache_size": 128, 00:22:22.621 "large_cache_size": 16, 00:22:22.621 "task_count": 2048, 00:22:22.621 "sequence_count": 2048, 00:22:22.621 "buf_count": 2048 00:22:22.621 } 00:22:22.621 } 00:22:22.621 ] 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "subsystem": "bdev", 00:22:22.621 "config": [ 00:22:22.621 { 00:22:22.621 "method": "bdev_set_options", 00:22:22.621 "params": { 00:22:22.621 "bdev_io_pool_size": 65535, 00:22:22.621 "bdev_io_cache_size": 256, 00:22:22.621 "bdev_auto_examine": true, 00:22:22.621 "iobuf_small_cache_size": 128, 00:22:22.621 "iobuf_large_cache_size": 16 00:22:22.621 } 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "method": "bdev_raid_set_options", 00:22:22.621 "params": { 00:22:22.621 "process_window_size_kb": 1024, 00:22:22.621 "process_max_bandwidth_mb_sec": 0 00:22:22.621 } 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "method": "bdev_iscsi_set_options", 00:22:22.621 "params": { 00:22:22.621 "timeout_sec": 30 00:22:22.621 } 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "method": "bdev_nvme_set_options", 00:22:22.621 "params": { 00:22:22.621 "action_on_timeout": "none", 00:22:22.621 "timeout_us": 0, 00:22:22.621 "timeout_admin_us": 0, 00:22:22.621 "keep_alive_timeout_ms": 10000, 00:22:22.621 "arbitration_burst": 0, 00:22:22.621 "low_priority_weight": 0, 00:22:22.621 "medium_priority_weight": 0, 00:22:22.621 "high_priority_weight": 0, 00:22:22.621 "nvme_adminq_poll_period_us": 10000, 00:22:22.621 "nvme_ioq_poll_period_us": 0, 00:22:22.621 "io_queue_requests": 512, 00:22:22.621 "delay_cmd_submit": true, 00:22:22.621 "transport_retry_count": 4, 00:22:22.621 "bdev_retry_count": 3, 00:22:22.621 "transport_ack_timeout": 0, 00:22:22.621 "ctrlr_loss_timeout_sec": 0, 00:22:22.621 "reconnect_delay_sec": 0, 00:22:22.621 "fast_io_fail_timeout_sec": 0, 00:22:22.621 "disable_auto_failback": false, 00:22:22.621 "generate_uuids": false, 00:22:22.621 "transport_tos": 0, 00:22:22.621 "nvme_error_stat": false, 00:22:22.621 "rdma_srq_size": 0, 00:22:22.621 "io_path_stat": false, 00:22:22.621 "allow_accel_sequence": false, 00:22:22.621 "rdma_max_cq_size": 0, 00:22:22.621 "rdma_cm_event_timeout_ms": 0, 00:22:22.621 "dhchap_digests": [ 00:22:22.621 "sha256", 00:22:22.621 "sha384", 00:22:22.621 "sha512" 00:22:22.621 ], 00:22:22.621 "dhchap_dhgroups": [ 00:22:22.621 "null", 00:22:22.621 "ffdhe2048", 00:22:22.621 "ffdhe3072", 00:22:22.621 "ffdhe4096", 00:22:22.621 "ffdhe6144", 00:22:22.621 "ffdhe8192" 00:22:22.621 ] 00:22:22.621 } 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "method": "bdev_nvme_attach_controller", 00:22:22.621 "params": { 00:22:22.621 "name": "nvme0", 00:22:22.621 "trtype": "TCP", 00:22:22.621 "adrfam": "IPv4", 00:22:22.621 "traddr": "10.0.0.2", 00:22:22.621 "trsvcid": "4420", 00:22:22.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.621 "prchk_reftag": false, 00:22:22.621 "prchk_guard": false, 00:22:22.621 "ctrlr_loss_timeout_sec": 0, 00:22:22.621 
"reconnect_delay_sec": 0, 00:22:22.621 "fast_io_fail_timeout_sec": 0, 00:22:22.621 "psk": "key0", 00:22:22.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.621 "hdgst": false, 00:22:22.621 "ddgst": false 00:22:22.621 } 00:22:22.621 }, 00:22:22.621 { 00:22:22.621 "method": "bdev_nvme_set_hotplug", 00:22:22.622 "params": { 00:22:22.622 "period_us": 100000, 00:22:22.622 "enable": false 00:22:22.622 } 00:22:22.622 }, 00:22:22.622 { 00:22:22.622 "method": "bdev_enable_histogram", 00:22:22.622 "params": { 00:22:22.622 "name": "nvme0n1", 00:22:22.622 "enable": true 00:22:22.622 } 00:22:22.622 }, 00:22:22.622 { 00:22:22.622 "method": "bdev_wait_for_examine" 00:22:22.622 } 00:22:22.622 ] 00:22:22.622 }, 00:22:22.622 { 00:22:22.622 "subsystem": "nbd", 00:22:22.622 "config": [] 00:22:22.622 } 00:22:22.622 ] 00:22:22.622 }' 00:22:22.622 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.622 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.622 12:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.622 [2024-07-22 12:17:30.432984] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:22.622 [2024-07-22 12:17:30.433076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033205 ] 00:22:22.622 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.622 [2024-07-22 12:17:30.464625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:22.622 [2024-07-22 12:17:30.495754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.880 [2024-07-22 12:17:30.588741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.880 [2024-07-22 12:17:30.767207] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.816 12:17:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.816 12:17:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:23.816 12:17:31 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.816 12:17:31 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:23.816 12:17:31 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.816 12:17:31 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.074 Running I/O for 1 seconds... 
00:22:25.027 00:22:25.027 Latency(us) 00:22:25.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.027 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:25.027 Verification LBA range: start 0x0 length 0x2000 00:22:25.027 nvme0n1 : 1.03 3261.60 12.74 0.00 0.00 38640.07 11116.85 55535.69 00:22:25.027 =================================================================================================================== 00:22:25.027 Total : 3261.60 12.74 0.00 0.00 38640.07 11116.85 55535.69 00:22:25.027 0 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:25.027 nvmf_trace.0 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1033205 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1033205 ']' 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1033205 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1033205 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1033205' 00:22:25.027 killing process with pid 1033205 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1033205 00:22:25.027 Received shutdown signal, test time was about 1.000000 seconds 00:22:25.027 00:22:25.027 Latency(us) 00:22:25.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.027 =================================================================================================================== 00:22:25.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.027 12:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1033205 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
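[editor's note] Teardown archives any SPDK shared-memory trace files so the run can be analyzed offline; the process_shm --id 0 step in the trace boils down to roughly the following, where the output directory variable is an assumption standing in for the harness's path:

    # Find shm files for id 0 (e.g. nvmf_trace.0) and tar them up.
    for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -cvzf "$output_dir/${f}_shm.tar.gz" "$f"
    done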
00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.284 rmmod nvme_tcp 00:22:25.284 rmmod nvme_fabrics 00:22:25.284 rmmod nvme_keyring 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1033054 ']' 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1033054 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1033054 ']' 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1033054 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:25.284 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1033054 00:22:25.541 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:25.541 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:25.541 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1033054' 00:22:25.541 killing process with pid 1033054 00:22:25.541 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1033054 00:22:25.541 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1033054 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.799 12:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.695 12:17:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:27.695 12:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uQSNRIhx93 /tmp/tmp.tKGWx3VdOD /tmp/tmp.fSldM0Xo6C 00:22:27.695 00:22:27.695 real 1m19.040s 00:22:27.695 user 2m1.625s 00:22:27.695 sys 0m26.678s 00:22:27.695 12:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:27.695 12:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.695 ************************************ 00:22:27.695 END TEST nvmf_tls 00:22:27.695 ************************************ 00:22:27.695 12:17:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:27.695 12:17:35 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:27.695 12:17:35 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:27.695 12:17:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.695 12:17:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:27.695 ************************************ 00:22:27.695 START TEST nvmf_fips 00:22:27.695 ************************************ 00:22:27.695 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:27.695 * Looking for test storage... 00:22:27.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:27.695 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.695 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:27.954 
12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:27.954 Error setting digest 00:22:27.954 00B244B0BA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:27.954 00B244B0BA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.954 12:17:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.851 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.852 
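Between here and the "Found net devices" records below, gather_supported_nvmf_pci_devs builds whitelists of E810/X722/Mellanox device IDs and resolves each matching PCI function to its kernel interface through sysfs. A small sketch of that lookup, using the two E810 port addresses this job discovered (other hosts will have different addresses):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # Every net interface registered under the PCI function appears as a
    # directory entry here; the glob is the same one pci_net_devs uses.
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] && echo "$pci -> ${path##*/}"
    done
done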
12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:29.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:29.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:29.852 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:29.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.852 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:22:30.111 00:22:30.111 --- 10.0.0.2 ping statistics --- 00:22:30.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.111 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
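The nvmf_tcp_init block above is the whole test topology in nine commands: one NIC port moves into a private network namespace and becomes the target side, the other stays in the root namespace as the initiator, and TCP/4420 is opened between them. Replayed in one place (interface names are this job's cvl_0_*):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> namespaced target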
00:22:30.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:30.111 00:22:30.111 --- 10.0.0.1 ping statistics --- 00:22:30.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.111 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1035438 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1035438 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1035438 ']' 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.111 12:17:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:30.111 [2024-07-22 12:17:37.987054] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:30.111 [2024-07-22 12:17:37.987141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.111 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.111 [2024-07-22 12:17:38.028274] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:30.368 [2024-07-22 12:17:38.057408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.368 [2024-07-22 12:17:38.146426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
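At this point nvmfappstart has launched the target inside the namespace and waitforlisten is polling for the RPC socket rather than sleeping a fixed interval. A sketch of that start-and-poll pattern, assuming workspace-relative paths in place of the absolute Jenkins paths above:

ip netns exec cvl_0_0_ns_spdk \
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the RPC socket; bail out if the target dies before it listens.
# The unix socket lives on the shared filesystem, so rpc.py can stay in
# the root namespace.
until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done
echo "nvmf_tgt listening, pid $nvmfpid"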
00:22:30.368 [2024-07-22 12:17:38.146483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.368 [2024-07-22 12:17:38.146496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.368 [2024-07-22 12:17:38.146508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.369 [2024-07-22 12:17:38.146517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.369 [2024-07-22 12:17:38.146544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.369 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:30.628 [2024-07-22 12:17:38.505543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.628 [2024-07-22 12:17:38.521540] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.628 [2024-07-22 12:17:38.521786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.628 [2024-07-22 12:17:38.553261] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:30.628 malloc0 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1035592 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1035592 /var/tmp/bdevperf.sock 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1035592 ']' 00:22:30.887 12:17:38 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.887 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:30.887 [2024-07-22 12:17:38.648128] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:22:30.887 [2024-07-22 12:17:38.648223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035592 ] 00:22:30.887 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.887 [2024-07-22 12:17:38.679838] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:30.887 [2024-07-22 12:17:38.707228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.887 [2024-07-22 12:17:38.792980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.145 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.145 12:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:31.145 12:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:31.423 [2024-07-22 12:17:39.181697] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.423 [2024-07-22 12:17:39.181853] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:31.423 TLSTESTn1 00:22:31.423 12:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.690 Running I/O for 10 seconds... 
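The TLS handshake under test is wired up entirely above: the interchange-format PSK is written to a 0600 file, registered with the target for host nqn.2016-06.io.spdk:host1, then handed to bdevperf's attach. Condensed into one block (the key is this run's throwaway test key, not a secret; key.txt stands in for the workspace path):

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt
chmod 0600 key.txt                        # keep the PSK file private
./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests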
00:22:41.662 00:22:41.662 Latency(us) 00:22:41.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.662 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.662 Verification LBA range: start 0x0 length 0x2000 00:22:41.662 TLSTESTn1 : 10.03 3408.65 13.32 0.00 0.00 37464.94 9709.04 52817.16 00:22:41.662 =================================================================================================================== 00:22:41.662 Total : 3408.65 13.32 0.00 0.00 37464.94 9709.04 52817.16 00:22:41.662 0 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:41.662 nvmf_trace.0 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1035592 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1035592 ']' 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1035592 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035592 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035592' 00:22:41.662 killing process with pid 1035592 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1035592 00:22:41.662 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.662 00:22:41.662 Latency(us) 00:22:41.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.662 =================================================================================================================== 00:22:41.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.662 [2024-07-22 12:17:49.541161] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:41.662 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1035592 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.921 rmmod nvme_tcp 00:22:41.921 rmmod nvme_fabrics 00:22:41.921 rmmod nvme_keyring 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1035438 ']' 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1035438 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1035438 ']' 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1035438 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035438 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.921 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035438' 00:22:41.921 killing process with pid 1035438 00:22:41.922 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1035438 00:22:41.922 [2024-07-22 12:17:49.836349] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:41.922 12:17:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1035438 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.181 12:17:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:44.715 00:22:44.715 real 0m16.572s 00:22:44.715 user 0m20.769s 00:22:44.715 sys 0m6.093s 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.715 ************************************ 00:22:44.715 END TEST nvmf_fips 
00:22:44.715 ************************************ 00:22:44.715 12:17:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:44.715 12:17:52 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:22:44.715 12:17:52 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:44.715 12:17:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:44.715 12:17:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.715 12:17:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.715 ************************************ 00:22:44.715 START TEST nvmf_fuzz 00:22:44.715 ************************************ 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:44.715 * Looking for test storage... 00:22:44.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.715 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.716 12:17:52 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.716 12:17:52 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:46.618 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:46.618 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:46.618 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:46.618 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:22:46.618 00:22:46.618 --- 10.0.0.2 ping statistics --- 00:22:46.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.618 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
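The fuzz test repeats the same two reachability probes from the FIPS run. A fail-fast variant of those probes (an assumption on top of what the trace shows, which simply runs them under xtrace) would abort before the target starts if the namespace wiring is broken:

ping -c 1 -W 2 10.0.0.2 >/dev/null \
    || { echo "root ns cannot reach target IP 10.0.0.2" >&2; exit 1; }
ip netns exec cvl_0_0_ns_spdk ping -c 1 -W 2 10.0.0.1 >/dev/null \
    || { echo "namespace cannot reach initiator IP 10.0.0.1" >&2; exit 1; }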
00:22:46.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:22:46.618 00:22:46.618 --- 10.0.0.1 ping statistics --- 00:22:46.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.618 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1038836 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1038836 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1038836 ']' 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
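Once waitforlisten returns, fabrics_fuzz.sh provisions the fuzz target through rpc_cmd: a TCP transport, a 64 MiB malloc-backed namespace, and a listener on 4420. The individual calls appear as separate records below; collected here as plain rpc.py invocations against the same socket:

rpc() { ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create -b Malloc0 64 512          # 64 MiB, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420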
00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.618 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:46.875 Malloc0 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:46.875 12:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:18.944 Fuzzing completed. 
Shutting down the fuzz application 00:23:18.944 00:23:18.944 Dumping successful admin opcodes: 00:23:18.944 8, 9, 10, 24, 00:23:18.944 Dumping successful io opcodes: 00:23:18.944 0, 9, 00:23:18.944 NS: 0x200003aeff00 I/O qp, Total commands completed: 447513, total successful commands: 2599, random_seed: 58097280 00:23:18.944 NS: 0x200003aeff00 admin qp, Total commands completed: 56080, total successful commands: 445, random_seed: 1525763136 00:23:18.944 12:18:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:18.944 Fuzzing completed. Shutting down the fuzz application 00:23:18.944 00:23:18.944 Dumping successful admin opcodes: 00:23:18.944 24, 00:23:18.944 Dumping successful io opcodes: 00:23:18.944 00:23:18.944 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2505601148 00:23:18.944 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2505715385 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.944 rmmod nvme_tcp 00:23:18.944 rmmod nvme_fabrics 00:23:18.944 rmmod nvme_keyring 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1038836 ']' 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1038836 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1038836 ']' 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1038836 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1038836 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:18.944 
12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1038836' 00:23:18.944 killing process with pid 1038836 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1038836 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1038836 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.944 12:18:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.546 12:18:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.546 12:18:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:21.546 00:23:21.546 real 0m36.708s 00:23:21.546 user 0m50.676s 00:23:21.547 sys 0m15.273s 00:23:21.547 12:18:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:21.547 12:18:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:21.547 ************************************ 00:23:21.547 END TEST nvmf_fuzz 00:23:21.547 ************************************ 00:23:21.547 12:18:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:21.547 12:18:28 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:21.547 12:18:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:21.547 12:18:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.547 12:18:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.547 ************************************ 00:23:21.547 START TEST nvmf_multiconnection 00:23:21.547 ************************************ 00:23:21.547 12:18:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:21.547 * Looking for test storage... 
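The fuzz stage that finishes above reduces to one invocation pattern per subsystem. A minimal sketch, assuming an SPDK build tree with a target already listening on 10.0.0.2:4420; the -F transport string and -j seed file are copied from the log, and -a is passed exactly as captured (see nvme_fuzz --help for flag details):

  # Fuzz one NVMe/TCP subsystem from core 1 (mask 0x2), seeding the
  # mutator with the example admin/IO commands shipped with SPDK.
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
      -j ./test/app/fuzz/nvme_fuzz/example.json -a

On exit the fuzzer dumps which admin and I/O opcodes ever completed successfully plus per-queue command totals, which is what the opcode lists above report; the harness then deletes the subsystem over RPC and unloads the nvme-tcp/nvme-fabrics kernel modules.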
00:23:21.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.547 12:18:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.449 12:18:30 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:23.449 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:23.449 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:23.449 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:23.449 12:18:30 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:23.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.449 12:18:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:23:23.449 00:23:23.449 --- 10.0.0.2 ping statistics --- 00:23:23.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.449 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:23:23.449 00:23:23.449 --- 10.0.0.1 ping statistics --- 00:23:23.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.449 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1045059 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1045059 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1045059 ']' 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
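The nvmf_tcp_init sequence that just completed isolates one port of the dual-port ice NIC in a network namespace, so initiator and target exchange real TCP traffic on a single host. Condensed to exactly the commands visible in the log (interface names cvl_0_0/cvl_0_1 as discovered during the PCI enumeration above):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator port stays in the root ns
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # root ns reaches the target side
  ip netns exec $NS ping -c 1 10.0.0.1      # namespace reaches the initiator side

nvmf_tgt is then launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF line above), so every listener it opens on 10.0.0.2 is reachable from the root namespace over cvl_0_1.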
00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.449 [2024-07-22 12:18:31.085854] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:23:23.449 [2024-07-22 12:18:31.085955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.449 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.449 [2024-07-22 12:18:31.123444] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:23.449 [2024-07-22 12:18:31.155690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.449 [2024-07-22 12:18:31.247996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.449 [2024-07-22 12:18:31.248055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.449 [2024-07-22 12:18:31.248079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.449 [2024-07-22 12:18:31.248092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.449 [2024-07-22 12:18:31.248104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.449 [2024-07-22 12:18:31.248194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.449 [2024-07-22 12:18:31.248262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.449 [2024-07-22 12:18:31.248362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.449 [2024-07-22 12:18:31.248364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.449 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 [2024-07-22 12:18:31.405584] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 Malloc1 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 [2024-07-22 12:18:31.462737] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 Malloc2 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 Malloc3 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 Malloc4 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.706 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:23.707 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.707 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.707 Malloc5 00:23:23.707 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.707 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:23.707 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.707 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 Malloc6 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:23.964 12:18:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 Malloc7 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 Malloc8 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 Malloc9 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 Malloc10 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.964 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.221 Malloc11 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:24.221 12:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:24.788 12:18:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:24.788 12:18:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:24.788 12:18:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:24.788 12:18:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:24.788 12:18:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.310 12:18:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:27.567 12:18:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:27.567 12:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:27.567 12:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:27.567 12:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:27.567 12:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:29.485 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:29.485 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:29.485 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:29.485 12:18:37 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:29.485 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:29.485 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:29.485 12:18:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:29.485 12:18:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:30.047 12:18:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:30.047 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:30.047 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:30.047 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:30.047 12:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.586 12:18:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:32.844 12:18:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:32.844 12:18:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:32.844 12:18:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:32.844 12:18:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:32.844 12:18:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:34.742 12:18:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:34.742 12:18:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:34.742 12:18:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:34.999 12:18:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:34.999 12:18:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:34.999 12:18:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
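Everything from the transport creation down to the connect loop in progress here follows one recipe per subsystem, repeated NVMF_SUBSYS=11 times. A minimal sketch using SPDK's rpc.py directly; the log drives the same RPCs through its rpc_cmd wrapper, and the script path and default RPC socket are assumptions:

  # Target side: one TCP transport, then 11 identical subsystems, each
  # backed by a 64 MiB, 512-byte-block malloc RAM disk.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
  done

  # Initiator side: connect each subsystem through the kernel host stack.
  for i in $(seq 1 11); do
      nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
          -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
  done

The serial number SPDK$i assigned at subsystem creation is what the waitforserial checks between connects grep for in lsblk output.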
00:23:34.999 12:18:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.999 12:18:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:35.564 12:18:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:35.565 12:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:35.565 12:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:35.565 12:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:35.565 12:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:37.463 12:18:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:38.434 12:18:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:38.434 12:18:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:38.434 12:18:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:38.434 12:18:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:38.434 12:18:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.338 12:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:41.272 12:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:41.272 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:41.272 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:41.272 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:41.272 12:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:43.173 12:18:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:44.110 12:18:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:44.110 12:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:44.110 12:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:44.110 12:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:44.110 12:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.007 12:18:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:46.941 12:18:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:46.941 12:18:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
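[annotation] Each nvme connect traced above (target/multiconnection.sh@28-30) is one iteration of a short loop over the NVMF_SUBSYS subsystems (11 in this run). A hedged sketch of that loop, with the host NQN, transport, address, and port copied verbatim from the trace:

for i in $(seq 1 "$NVMF_SUBSYS"); do
    # one TCP connection per SPDK subsystem, then wait for its namespace
    nvme connect \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"
done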
00:23:46.941 12:18:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:46.941 12:18:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:46.941 12:18:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:48.841 12:18:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:49.098 12:18:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:49.098 12:18:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:23:49.098 12:18:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:49.098 12:18:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:49.098 12:18:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:49.098 12:18:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.098 12:18:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:50.027 12:18:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:50.027 12:18:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:50.027 12:18:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:50.027 12:18:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:50.027 12:18:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.933 12:18:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:52.870 12:19:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:52.870 12:19:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:52.870 12:19:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.870 12:19:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:52.870 12:19:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # sleep 2 00:23:55.401 12:19:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:55.401 12:19:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:55.401 12:19:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:23:55.401 12:19:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:55.401 12:19:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:55.401 12:19:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:55.401 12:19:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:55.401 [global] 00:23:55.401 thread=1 00:23:55.401 invalidate=1 00:23:55.401 rw=read 00:23:55.401 time_based=1 00:23:55.401 runtime=10 00:23:55.401 ioengine=libaio 00:23:55.401 direct=1 00:23:55.401 bs=262144 00:23:55.401 iodepth=64 00:23:55.401 norandommap=1 00:23:55.401 numjobs=1 00:23:55.401 00:23:55.401 [job0] 00:23:55.401 filename=/dev/nvme0n1 00:23:55.401 [job1] 00:23:55.401 filename=/dev/nvme10n1 00:23:55.401 [job2] 00:23:55.401 filename=/dev/nvme1n1 00:23:55.401 [job3] 00:23:55.401 filename=/dev/nvme2n1 00:23:55.401 [job4] 00:23:55.401 filename=/dev/nvme3n1 00:23:55.401 [job5] 00:23:55.401 filename=/dev/nvme4n1 00:23:55.401 [job6] 00:23:55.401 filename=/dev/nvme5n1 00:23:55.401 [job7] 00:23:55.401 filename=/dev/nvme6n1 00:23:55.401 [job8] 00:23:55.401 filename=/dev/nvme7n1 00:23:55.401 [job9] 00:23:55.401 filename=/dev/nvme8n1 00:23:55.401 [job10] 00:23:55.401 filename=/dev/nvme9n1 00:23:55.401 Could not set queue depth (nvme0n1) 00:23:55.401 Could not set queue depth (nvme10n1) 00:23:55.401 Could not set queue depth (nvme1n1) 00:23:55.401 Could not set queue depth (nvme2n1) 00:23:55.401 Could not set queue depth (nvme3n1) 00:23:55.401 Could not set queue depth (nvme4n1) 00:23:55.401 Could not set queue depth (nvme5n1) 00:23:55.401 Could not set queue depth (nvme6n1) 00:23:55.401 Could not set queue depth (nvme7n1) 00:23:55.401 Could not set queue depth (nvme8n1) 00:23:55.401 Could not set queue depth (nvme9n1) 00:23:55.401 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:55.401 fio-3.35 00:23:55.401 Starting 11 threads 00:24:07.636 00:24:07.636 job0: (groupid=0, jobs=1): err= 0: pid=1049320: Mon Jul 22 12:19:13 2024 00:24:07.636 read: IOPS=524, BW=131MiB/s (138MB/s)(1325MiB/10102msec) 00:24:07.636 slat (usec): min=9, max=126244, avg=1379.88, stdev=6058.78 00:24:07.636 clat (usec): min=813, max=287619, avg=120510.49, stdev=55374.34 00:24:07.636 lat (usec): min=834, max=312585, avg=121890.37, stdev=56342.49 00:24:07.636 clat percentiles (msec): 00:24:07.636 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 69], 00:24:07.636 | 30.00th=[ 91], 40.00th=[ 105], 50.00th=[ 130], 60.00th=[ 148], 00:24:07.636 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 201], 00:24:07.636 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 288], 99.95th=[ 288], 00:24:07.636 | 99.99th=[ 288] 00:24:07.636 bw ( KiB/s): min=80384, max=244736, per=6.75%, avg=134041.60, stdev=43591.94, samples=20 00:24:07.636 iops : min= 314, max= 956, avg=523.60, stdev=170.28, samples=20 00:24:07.636 lat (usec) : 1000=0.11% 00:24:07.636 lat (msec) : 4=0.62%, 10=0.91%, 20=1.96%, 50=9.98%, 100=23.06% 00:24:07.636 lat (msec) : 250=62.88%, 500=0.47% 00:24:07.636 cpu : usr=0.34%, sys=1.57%, ctx=1276, majf=0, minf=4097 00:24:07.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:07.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.636 issued rwts: total=5299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.636 job1: (groupid=0, jobs=1): err= 0: pid=1049321: Mon Jul 22 12:19:13 2024 00:24:07.636 read: IOPS=633, BW=158MiB/s (166MB/s)(1600MiB/10108msec) 00:24:07.636 slat (usec): min=9, max=101214, avg=1083.44, stdev=4179.64 00:24:07.636 clat (msec): min=2, max=258, avg=99.93, stdev=43.19 00:24:07.636 lat (msec): min=2, max=343, avg=101.01, stdev=43.79 00:24:07.636 clat percentiles (msec): 00:24:07.636 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 74], 00:24:07.636 | 30.00th=[ 87], 40.00th=[ 95], 50.00th=[ 102], 60.00th=[ 108], 00:24:07.636 | 70.00th=[ 116], 80.00th=[ 129], 90.00th=[ 153], 95.00th=[ 171], 00:24:07.636 | 99.00th=[ 220], 99.50th=[ 243], 99.90th=[ 251], 99.95th=[ 257], 00:24:07.636 | 99.99th=[ 259] 00:24:07.636 bw ( KiB/s): min=108544, max=245248, per=8.17%, avg=162216.90, stdev=37124.00, samples=20 00:24:07.636 iops : min= 424, max= 958, avg=633.65, stdev=145.02, samples=20 00:24:07.636 lat (msec) : 4=0.11%, 10=0.91%, 20=4.06%, 50=8.39%, 100=35.32% 00:24:07.636 lat (msec) : 250=51.12%, 500=0.09% 00:24:07.636 cpu : usr=0.30%, sys=1.95%, ctx=1586, majf=0, minf=4097 00:24:07.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=6399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job2: (groupid=0, jobs=1): err= 0: pid=1049322: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=1558, BW=390MiB/s (409MB/s)(3916MiB/10048msec) 00:24:07.637 slat (usec): min=9, max=27534, avg=566.75, stdev=1673.59 00:24:07.637 
clat (msec): min=2, max=200, avg=40.46, stdev=17.13 00:24:07.637 lat (msec): min=2, max=200, avg=41.03, stdev=17.28 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 32], 00:24:07.637 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37], 00:24:07.637 | 70.00th=[ 41], 80.00th=[ 46], 90.00th=[ 62], 95.00th=[ 75], 00:24:07.637 | 99.00th=[ 100], 99.50th=[ 115], 99.90th=[ 194], 99.95th=[ 197], 00:24:07.637 | 99.99th=[ 201] 00:24:07.637 bw ( KiB/s): min=232448, max=517120, per=20.10%, avg=399334.40, stdev=84932.15, samples=20 00:24:07.637 iops : min= 908, max= 2020, avg=1559.90, stdev=331.77, samples=20 00:24:07.637 lat (msec) : 4=0.02%, 10=0.46%, 20=1.42%, 50=81.52%, 100=15.68% 00:24:07.637 lat (msec) : 250=0.91% 00:24:07.637 cpu : usr=0.78%, sys=4.62%, ctx=2791, majf=0, minf=4097 00:24:07.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=15662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job3: (groupid=0, jobs=1): err= 0: pid=1049323: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=510, BW=128MiB/s (134MB/s)(1291MiB/10107msec) 00:24:07.637 slat (usec): min=10, max=138964, avg=1534.16, stdev=5361.77 00:24:07.637 clat (msec): min=13, max=266, avg=123.64, stdev=44.59 00:24:07.637 lat (msec): min=13, max=375, avg=125.17, stdev=45.42 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 29], 5.00th=[ 49], 10.00th=[ 70], 20.00th=[ 86], 00:24:07.637 | 30.00th=[ 103], 40.00th=[ 113], 50.00th=[ 123], 60.00th=[ 133], 00:24:07.637 | 70.00th=[ 148], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 205], 00:24:07.637 | 99.00th=[ 245], 99.50th=[ 262], 99.90th=[ 266], 99.95th=[ 266], 00:24:07.637 | 99.99th=[ 268] 00:24:07.637 bw ( KiB/s): min=98304, max=221696, per=6.57%, avg=130560.00, stdev=31289.16, samples=20 00:24:07.637 iops : min= 384, max= 866, avg=510.00, stdev=122.22, samples=20 00:24:07.637 lat (msec) : 20=0.33%, 50=5.50%, 100=23.09%, 250=70.31%, 500=0.77% 00:24:07.637 cpu : usr=0.35%, sys=1.77%, ctx=1246, majf=0, minf=3721 00:24:07.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=5163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job4: (groupid=0, jobs=1): err= 0: pid=1049324: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=578, BW=145MiB/s (152MB/s)(1453MiB/10045msec) 00:24:07.637 slat (usec): min=11, max=62138, avg=1445.21, stdev=4468.52 00:24:07.637 clat (msec): min=5, max=288, avg=109.11, stdev=48.81 00:24:07.637 lat (msec): min=5, max=293, avg=110.56, stdev=49.62 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 16], 5.00th=[ 42], 10.00th=[ 57], 20.00th=[ 70], 00:24:07.637 | 30.00th=[ 81], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 110], 00:24:07.637 | 70.00th=[ 132], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 199], 00:24:07.637 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 288], 99.95th=[ 288], 00:24:07.637 | 99.99th=[ 288] 00:24:07.637 bw ( KiB/s): min=72192, max=306688, per=7.40%, avg=147116.35, stdev=56465.82, 
samples=20 00:24:07.637 iops : min= 282, max= 1198, avg=574.65, stdev=220.55, samples=20 00:24:07.637 lat (msec) : 10=0.46%, 20=1.82%, 50=4.58%, 100=45.13%, 250=47.09% 00:24:07.637 lat (msec) : 500=0.91% 00:24:07.637 cpu : usr=0.31%, sys=2.07%, ctx=1336, majf=0, minf=4097 00:24:07.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=5810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job5: (groupid=0, jobs=1): err= 0: pid=1049325: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=615, BW=154MiB/s (161MB/s)(1555MiB/10101msec) 00:24:07.637 slat (usec): min=11, max=91835, avg=1509.47, stdev=4425.49 00:24:07.637 clat (msec): min=5, max=262, avg=102.32, stdev=39.54 00:24:07.637 lat (msec): min=5, max=262, avg=103.83, stdev=40.16 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 54], 20.00th=[ 70], 00:24:07.637 | 30.00th=[ 81], 40.00th=[ 89], 50.00th=[ 100], 60.00th=[ 112], 00:24:07.637 | 70.00th=[ 125], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 165], 00:24:07.637 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 228], 99.95th=[ 255], 00:24:07.637 | 99.99th=[ 264] 00:24:07.637 bw ( KiB/s): min=99328, max=339456, per=7.93%, avg=157644.80, stdev=55837.02, samples=20 00:24:07.637 iops : min= 388, max= 1326, avg=615.80, stdev=218.11, samples=20 00:24:07.637 lat (msec) : 10=0.10%, 20=0.76%, 50=7.97%, 100=42.47%, 250=48.63% 00:24:07.637 lat (msec) : 500=0.08% 00:24:07.637 cpu : usr=0.40%, sys=2.13%, ctx=1339, majf=0, minf=4097 00:24:07.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=6221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job6: (groupid=0, jobs=1): err= 0: pid=1049326: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=543, BW=136MiB/s (143MB/s)(1366MiB/10053msec) 00:24:07.637 slat (usec): min=9, max=78787, avg=1408.33, stdev=4840.45 00:24:07.637 clat (usec): min=1382, max=308504, avg=116225.78, stdev=55668.98 00:24:07.637 lat (usec): min=1397, max=308564, avg=117634.11, stdev=56458.87 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 35], 20.00th=[ 67], 00:24:07.637 | 30.00th=[ 90], 40.00th=[ 104], 50.00th=[ 116], 60.00th=[ 136], 00:24:07.637 | 70.00th=[ 150], 80.00th=[ 163], 90.00th=[ 186], 95.00th=[ 207], 00:24:07.637 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 271], 99.95th=[ 279], 00:24:07.637 | 99.99th=[ 309] 00:24:07.637 bw ( KiB/s): min=73216, max=259584, per=6.96%, avg=138291.20, stdev=51105.57, samples=20 00:24:07.637 iops : min= 286, max= 1014, avg=540.20, stdev=199.63, samples=20 00:24:07.637 lat (msec) : 2=0.05%, 4=1.54%, 10=1.21%, 20=2.20%, 50=8.95% 00:24:07.637 lat (msec) : 100=23.66%, 250=61.74%, 500=0.66% 00:24:07.637 cpu : usr=0.25%, sys=1.72%, ctx=1409, majf=0, minf=4097 00:24:07.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=5465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job7: (groupid=0, jobs=1): err= 0: pid=1049327: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=686, BW=172MiB/s (180MB/s)(1726MiB/10048msec) 00:24:07.637 slat (usec): min=9, max=141795, avg=801.83, stdev=4943.61 00:24:07.637 clat (usec): min=1079, max=373205, avg=92306.86, stdev=52937.51 00:24:07.637 lat (usec): min=1098, max=377030, avg=93108.69, stdev=53688.17 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 33], 20.00th=[ 45], 00:24:07.637 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 79], 60.00th=[ 94], 00:24:07.637 | 70.00th=[ 123], 80.00th=[ 144], 90.00th=[ 163], 95.00th=[ 188], 00:24:07.637 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 275], 99.95th=[ 330], 00:24:07.637 | 99.99th=[ 372] 00:24:07.637 bw ( KiB/s): min=68096, max=344576, per=8.81%, avg=175088.65, stdev=68202.53, samples=20 00:24:07.637 iops : min= 266, max= 1346, avg=683.90, stdev=266.46, samples=20 00:24:07.637 lat (msec) : 2=0.12%, 4=0.14%, 10=0.68%, 20=2.56%, 50=20.50% 00:24:07.637 lat (msec) : 100=38.05%, 250=37.71%, 500=0.23% 00:24:07.637 cpu : usr=0.35%, sys=1.83%, ctx=1762, majf=0, minf=4097 00:24:07.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=6902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job8: (groupid=0, jobs=1): err= 0: pid=1049328: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=979, BW=245MiB/s (257MB/s)(2476MiB/10110msec) 00:24:07.637 slat (usec): min=10, max=172653, avg=883.10, stdev=3674.02 00:24:07.637 clat (usec): min=908, max=266508, avg=64402.89, stdev=42539.83 00:24:07.637 lat (usec): min=927, max=266552, avg=65285.99, stdev=43049.06 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 32], 00:24:07.637 | 30.00th=[ 37], 40.00th=[ 45], 50.00th=[ 52], 60.00th=[ 63], 00:24:07.637 | 70.00th=[ 79], 80.00th=[ 95], 90.00th=[ 128], 95.00th=[ 150], 00:24:07.637 | 99.00th=[ 220], 99.50th=[ 236], 99.90th=[ 253], 99.95th=[ 259], 00:24:07.637 | 99.99th=[ 268] 00:24:07.637 bw ( KiB/s): min=100352, max=408064, per=12.68%, avg=251878.40, stdev=91241.98, samples=20 00:24:07.637 iops : min= 392, max= 1594, avg=983.90, stdev=356.41, samples=20 00:24:07.637 lat (usec) : 1000=0.01% 00:24:07.637 lat (msec) : 2=0.21%, 4=0.74%, 10=2.96%, 20=4.20%, 50=39.72% 00:24:07.637 lat (msec) : 100=34.56%, 250=17.49%, 500=0.11% 00:24:07.637 cpu : usr=0.51%, sys=3.13%, ctx=1990, majf=0, minf=4097 00:24:07.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:07.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.637 issued rwts: total=9902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.637 job9: (groupid=0, jobs=1): err= 0: pid=1049330: Mon Jul 22 12:19:13 2024 00:24:07.637 read: IOPS=572, BW=143MiB/s (150MB/s)(1447MiB/10108msec) 00:24:07.637 slat (usec): min=13, max=55871, avg=1547.07, stdev=4262.48 00:24:07.637 clat (msec): min=12, 
max=258, avg=110.11, stdev=34.93 00:24:07.637 lat (msec): min=12, max=264, avg=111.66, stdev=35.50 00:24:07.637 clat percentiles (msec): 00:24:07.637 | 1.00th=[ 32], 5.00th=[ 59], 10.00th=[ 73], 20.00th=[ 84], 00:24:07.637 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 113], 00:24:07.637 | 70.00th=[ 125], 80.00th=[ 138], 90.00th=[ 159], 95.00th=[ 167], 00:24:07.637 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 251], 99.95th=[ 259], 00:24:07.637 | 99.99th=[ 259] 00:24:07.637 bw ( KiB/s): min=96256, max=195072, per=7.38%, avg=146560.00, stdev=30738.35, samples=20 00:24:07.638 iops : min= 376, max= 762, avg=572.50, stdev=120.07, samples=20 00:24:07.638 lat (msec) : 20=0.02%, 50=3.30%, 100=39.74%, 250=56.82%, 500=0.12% 00:24:07.638 cpu : usr=0.36%, sys=2.05%, ctx=1335, majf=0, minf=4097 00:24:07.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:07.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.638 issued rwts: total=5788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.638 job10: (groupid=0, jobs=1): err= 0: pid=1049331: Mon Jul 22 12:19:13 2024 00:24:07.638 read: IOPS=582, BW=146MiB/s (153MB/s)(1462MiB/10046msec) 00:24:07.638 slat (usec): min=9, max=129494, avg=1355.61, stdev=5660.11 00:24:07.638 clat (msec): min=3, max=324, avg=108.50, stdev=57.86 00:24:07.638 lat (msec): min=3, max=324, avg=109.85, stdev=58.73 00:24:07.638 clat percentiles (msec): 00:24:07.638 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 38], 20.00th=[ 57], 00:24:07.638 | 30.00th=[ 70], 40.00th=[ 87], 50.00th=[ 97], 60.00th=[ 116], 00:24:07.638 | 70.00th=[ 146], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 207], 00:24:07.638 | 99.00th=[ 249], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 326], 00:24:07.638 | 99.99th=[ 326] 00:24:07.638 bw ( KiB/s): min=77312, max=262656, per=7.46%, avg=148121.60, stdev=51787.34, samples=20 00:24:07.638 iops : min= 302, max= 1026, avg=578.60, stdev=202.29, samples=20 00:24:07.638 lat (msec) : 4=0.03%, 10=0.50%, 20=3.49%, 50=11.92%, 100=37.19% 00:24:07.638 lat (msec) : 250=46.02%, 500=0.85% 00:24:07.638 cpu : usr=0.23%, sys=1.86%, ctx=1323, majf=0, minf=4097 00:24:07.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:07.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:07.638 issued rwts: total=5849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.638 00:24:07.638 Run status group 0 (all jobs): 00:24:07.638 READ: bw=1940MiB/s (2034MB/s), 128MiB/s-390MiB/s (134MB/s-409MB/s), io=19.2GiB (20.6GB), run=10045-10110msec 00:24:07.638 00:24:07.638 Disk stats (read/write): 00:24:07.638 nvme0n1: ios=10400/0, merge=0/0, ticks=1236320/0, in_queue=1236320, util=97.22% 00:24:07.638 nvme10n1: ios=12593/0, merge=0/0, ticks=1233516/0, in_queue=1233516, util=97.43% 00:24:07.638 nvme1n1: ios=30989/0, merge=0/0, ticks=1241996/0, in_queue=1241996, util=97.71% 00:24:07.638 nvme2n1: ios=10161/0, merge=0/0, ticks=1234516/0, in_queue=1234516, util=97.85% 00:24:07.638 nvme3n1: ios=11387/0, merge=0/0, ticks=1234091/0, in_queue=1234091, util=97.91% 00:24:07.638 nvme4n1: ios=12219/0, merge=0/0, ticks=1235226/0, in_queue=1235226, util=98.26% 00:24:07.638 nvme5n1: ios=10693/0, merge=0/0, 
ticks=1235830/0, in_queue=1235830, util=98.41% 00:24:07.638 nvme6n1: ios=13548/0, merge=0/0, ticks=1244114/0, in_queue=1244114, util=98.51% 00:24:07.638 nvme7n1: ios=19626/0, merge=0/0, ticks=1231966/0, in_queue=1231966, util=98.89% 00:24:07.638 nvme8n1: ios=11375/0, merge=0/0, ticks=1226684/0, in_queue=1226684, util=99.10% 00:24:07.638 nvme9n1: ios=11496/0, merge=0/0, ticks=1236166/0, in_queue=1236166, util=99.20% 00:24:07.638 12:19:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:07.638 [global] 00:24:07.638 thread=1 00:24:07.638 invalidate=1 00:24:07.638 rw=randwrite 00:24:07.638 time_based=1 00:24:07.638 runtime=10 00:24:07.638 ioengine=libaio 00:24:07.638 direct=1 00:24:07.638 bs=262144 00:24:07.638 iodepth=64 00:24:07.638 norandommap=1 00:24:07.638 numjobs=1 00:24:07.638 00:24:07.638 [job0] 00:24:07.638 filename=/dev/nvme0n1 00:24:07.638 [job1] 00:24:07.638 filename=/dev/nvme10n1 00:24:07.638 [job2] 00:24:07.638 filename=/dev/nvme1n1 00:24:07.638 [job3] 00:24:07.638 filename=/dev/nvme2n1 00:24:07.638 [job4] 00:24:07.638 filename=/dev/nvme3n1 00:24:07.638 [job5] 00:24:07.638 filename=/dev/nvme4n1 00:24:07.638 [job6] 00:24:07.638 filename=/dev/nvme5n1 00:24:07.638 [job7] 00:24:07.638 filename=/dev/nvme6n1 00:24:07.638 [job8] 00:24:07.638 filename=/dev/nvme7n1 00:24:07.638 [job9] 00:24:07.638 filename=/dev/nvme8n1 00:24:07.638 [job10] 00:24:07.638 filename=/dev/nvme9n1 00:24:07.638 Could not set queue depth (nvme0n1) 00:24:07.638 Could not set queue depth (nvme10n1) 00:24:07.638 Could not set queue depth (nvme1n1) 00:24:07.638 Could not set queue depth (nvme2n1) 00:24:07.638 Could not set queue depth (nvme3n1) 00:24:07.638 Could not set queue depth (nvme4n1) 00:24:07.638 Could not set queue depth (nvme5n1) 00:24:07.638 Could not set queue depth (nvme6n1) 00:24:07.638 Could not set queue depth (nvme7n1) 00:24:07.638 Could not set queue depth (nvme8n1) 00:24:07.638 Could not set queue depth (nvme9n1) 00:24:07.638 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:07.638 fio-3.35 00:24:07.638 Starting 11 threads 00:24:17.613 
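[annotation] The fio-wrapper flags above (-p nvmf -i 262144 -d 64 -t randwrite -r 10) map directly onto the [global] options just printed: 262144-byte (256KiB) blocks, iodepth 64, a randwrite workload, and a 10-second time-based run against each /dev/nvme*n1 namespace. For a single namespace, the generated job reduces to roughly this plain fio invocation (an illustration only; the wrapper itself emits an ini job file spanning all eleven devices):

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=262144 --iodepth=64 --ioengine=libaio \
    --direct=1 --invalidate=1 --norandommap --numjobs=1 --thread \
    --time_based --runtime=10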
00:24:17.613 job0: (groupid=0, jobs=1): err= 0: pid=1050375: Mon Jul 22 12:19:24 2024 00:24:17.614 write: IOPS=630, BW=158MiB/s (165MB/s)(1587MiB/10069msec); 0 zone resets 00:24:17.614 slat (usec): min=16, max=97259, avg=961.71, stdev=3574.96 00:24:17.614 clat (usec): min=1553, max=404647, avg=100541.83, stdev=79101.57 00:24:17.614 lat (usec): min=1589, max=412265, avg=101503.54, stdev=80049.23 00:24:17.614 clat percentiles (msec): 00:24:17.614 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 46], 00:24:17.614 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 73], 60.00th=[ 90], 00:24:17.614 | 70.00th=[ 122], 80.00th=[ 148], 90.00th=[ 209], 95.00th=[ 275], 00:24:17.614 | 99.00th=[ 380], 99.50th=[ 384], 99.90th=[ 401], 99.95th=[ 401], 00:24:17.614 | 99.99th=[ 405] 00:24:17.614 bw ( KiB/s): min=45056, max=352768, per=11.42%, avg=160809.25, stdev=85799.91, samples=20 00:24:17.614 iops : min= 176, max= 1378, avg=628.10, stdev=335.15, samples=20 00:24:17.614 lat (msec) : 2=0.05%, 4=0.20%, 10=1.13%, 20=1.95%, 50=29.26% 00:24:17.614 lat (msec) : 100=30.52%, 250=30.35%, 500=6.52% 00:24:17.614 cpu : usr=1.84%, sys=2.02%, ctx=3839, majf=0, minf=1 00:24:17.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.614 issued rwts: total=0,6346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.614 job1: (groupid=0, jobs=1): err= 0: pid=1050424: Mon Jul 22 12:19:24 2024 00:24:17.614 write: IOPS=526, BW=132MiB/s (138MB/s)(1328MiB/10087msec); 0 zone resets 00:24:17.614 slat (usec): min=17, max=39250, avg=985.90, stdev=2995.98 00:24:17.614 clat (usec): min=1765, max=412830, avg=120519.42, stdev=67143.05 00:24:17.614 lat (msec): min=2, max=412, avg=121.51, stdev=67.64 00:24:17.614 clat percentiles (msec): 00:24:17.614 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 42], 20.00th=[ 70], 00:24:17.614 | 30.00th=[ 80], 40.00th=[ 94], 50.00th=[ 112], 60.00th=[ 124], 00:24:17.614 | 70.00th=[ 144], 80.00th=[ 171], 90.00th=[ 226], 95.00th=[ 245], 00:24:17.614 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 388], 99.95th=[ 401], 00:24:17.614 | 99.99th=[ 414] 00:24:17.614 bw ( KiB/s): min=69632, max=193024, per=9.54%, avg=134313.45, stdev=38564.15, samples=20 00:24:17.614 iops : min= 272, max= 754, avg=524.60, stdev=150.58, samples=20 00:24:17.614 lat (msec) : 2=0.02%, 4=0.09%, 10=0.55%, 20=1.98%, 50=11.35% 00:24:17.614 lat (msec) : 100=28.98%, 250=52.80%, 500=4.24% 00:24:17.614 cpu : usr=1.67%, sys=1.86%, ctx=3435, majf=0, minf=1 00:24:17.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.614 issued rwts: total=0,5311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.614 job2: (groupid=0, jobs=1): err= 0: pid=1050494: Mon Jul 22 12:19:24 2024 00:24:17.614 write: IOPS=416, BW=104MiB/s (109MB/s)(1056MiB/10145msec); 0 zone resets 00:24:17.614 slat (usec): min=16, max=200819, avg=1750.78, stdev=6169.51 00:24:17.614 clat (usec): min=1816, max=385766, avg=151861.05, stdev=73243.85 00:24:17.614 lat (usec): min=1853, max=408679, avg=153611.83, stdev=73707.15 00:24:17.614 clat percentiles (msec): 00:24:17.614 | 
1.00th=[ 4], 5.00th=[ 38], 10.00th=[ 71], 20.00th=[ 103], 00:24:17.614 | 30.00th=[ 122], 40.00th=[ 132], 50.00th=[ 142], 60.00th=[ 153], 00:24:17.614 | 70.00th=[ 165], 80.00th=[ 201], 90.00th=[ 245], 95.00th=[ 300], 00:24:17.614 | 99.00th=[ 380], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:24:17.614 | 99.99th=[ 384] 00:24:17.614 bw ( KiB/s): min=50688, max=154112, per=7.56%, avg=106421.35, stdev=26538.80, samples=20 00:24:17.614 iops : min= 198, max= 602, avg=415.70, stdev=103.67, samples=20 00:24:17.614 lat (msec) : 2=0.07%, 4=1.07%, 10=0.69%, 20=1.09%, 50=3.67% 00:24:17.614 lat (msec) : 100=12.79%, 250=71.46%, 500=9.17% 00:24:17.614 cpu : usr=1.13%, sys=1.61%, ctx=1946, majf=0, minf=1 00:24:17.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.614 issued rwts: total=0,4222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.614 job3: (groupid=0, jobs=1): err= 0: pid=1050512: Mon Jul 22 12:19:24 2024 00:24:17.614 write: IOPS=463, BW=116MiB/s (121MB/s)(1170MiB/10094msec); 0 zone resets 00:24:17.614 slat (usec): min=23, max=60934, avg=1496.15, stdev=4218.01 00:24:17.614 clat (usec): min=1637, max=437037, avg=136540.53, stdev=82095.38 00:24:17.614 lat (msec): min=2, max=437, avg=138.04, stdev=83.22 00:24:17.614 clat percentiles (msec): 00:24:17.614 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 53], 20.00th=[ 73], 00:24:17.614 | 30.00th=[ 81], 40.00th=[ 103], 50.00th=[ 130], 60.00th=[ 146], 00:24:17.614 | 70.00th=[ 161], 80.00th=[ 182], 90.00th=[ 241], 95.00th=[ 292], 00:24:17.614 | 99.00th=[ 422], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 439], 00:24:17.614 | 99.99th=[ 439] 00:24:17.614 bw ( KiB/s): min=38912, max=212992, per=8.39%, avg=118098.45, stdev=49891.74, samples=20 00:24:17.614 iops : min= 152, max= 832, avg=461.20, stdev=194.82, samples=20 00:24:17.614 lat (msec) : 2=0.02%, 4=0.11%, 10=0.88%, 20=1.71%, 50=6.48% 00:24:17.614 lat (msec) : 100=29.99%, 250=52.44%, 500=8.38% 00:24:17.614 cpu : usr=1.24%, sys=1.70%, ctx=2667, majf=0, minf=1 00:24:17.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.614 issued rwts: total=0,4678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.614 job4: (groupid=0, jobs=1): err= 0: pid=1050516: Mon Jul 22 12:19:24 2024 00:24:17.614 write: IOPS=498, BW=125MiB/s (131MB/s)(1264MiB/10143msec); 0 zone resets 00:24:17.614 slat (usec): min=16, max=53780, avg=1790.38, stdev=4169.71 00:24:17.614 clat (usec): min=1071, max=304459, avg=126502.17, stdev=65725.67 00:24:17.614 lat (usec): min=1106, max=304496, avg=128292.55, stdev=66644.01 00:24:17.614 clat percentiles (msec): 00:24:17.614 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 51], 20.00th=[ 77], 00:24:17.614 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 122], 60.00th=[ 148], 00:24:17.614 | 70.00th=[ 165], 80.00th=[ 184], 90.00th=[ 213], 95.00th=[ 247], 00:24:17.614 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 305], 00:24:17.614 | 99.99th=[ 305] 00:24:17.614 bw ( KiB/s): min=57344, max=253440, per=9.08%, avg=127797.15, stdev=55378.16, samples=20 00:24:17.614 iops : min= 
224, max= 990, avg=499.20, stdev=216.32, samples=20 00:24:17.614 lat (msec) : 2=0.22%, 4=0.63%, 10=0.89%, 20=1.72%, 50=6.45% 00:24:17.614 lat (msec) : 100=34.91%, 250=50.59%, 500=4.59% 00:24:17.614 cpu : usr=1.47%, sys=1.77%, ctx=1965, majf=0, minf=1 00:24:17.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.614 issued rwts: total=0,5056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.614 job5: (groupid=0, jobs=1): err= 0: pid=1050517: Mon Jul 22 12:19:24 2024 00:24:17.614 write: IOPS=483, BW=121MiB/s (127MB/s)(1226MiB/10144msec); 0 zone resets 00:24:17.614 slat (usec): min=18, max=62963, avg=1746.56, stdev=4040.63 00:24:17.614 clat (msec): min=2, max=455, avg=130.57, stdev=69.61 00:24:17.614 lat (msec): min=2, max=455, avg=132.32, stdev=70.53 00:24:17.614 clat percentiles (msec): 00:24:17.614 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 70], 20.00th=[ 80], 00:24:17.614 | 30.00th=[ 91], 40.00th=[ 111], 50.00th=[ 124], 60.00th=[ 136], 00:24:17.614 | 70.00th=[ 148], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 275], 00:24:17.614 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 418], 99.95th=[ 435], 00:24:17.614 | 99.99th=[ 456] 00:24:17.614 bw ( KiB/s): min=45056, max=179712, per=8.80%, avg=123906.70, stdev=39614.70, samples=20 00:24:17.614 iops : min= 176, max= 702, avg=484.00, stdev=154.75, samples=20 00:24:17.614 lat (msec) : 4=0.14%, 10=0.51%, 20=1.12%, 50=5.08%, 100=28.71% 00:24:17.614 lat (msec) : 250=58.71%, 500=5.73% 00:24:17.614 cpu : usr=1.57%, sys=1.52%, ctx=2020, majf=0, minf=1 00:24:17.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.614 issued rwts: total=0,4904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.614 job6: (groupid=0, jobs=1): err= 0: pid=1050519: Mon Jul 22 12:19:24 2024 00:24:17.614 write: IOPS=493, BW=123MiB/s (129MB/s)(1243MiB/10084msec); 0 zone resets 00:24:17.614 slat (usec): min=18, max=31509, avg=1961.96, stdev=3907.68 00:24:17.614 clat (msec): min=4, max=264, avg=127.78, stdev=59.20 00:24:17.614 lat (msec): min=4, max=266, avg=129.74, stdev=59.97 00:24:17.614 clat percentiles (msec): 00:24:17.614 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 66], 00:24:17.614 | 30.00th=[ 85], 40.00th=[ 108], 50.00th=[ 126], 60.00th=[ 138], 00:24:17.614 | 70.00th=[ 161], 80.00th=[ 182], 90.00th=[ 218], 95.00th=[ 239], 00:24:17.614 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 266], 99.95th=[ 266], 00:24:17.614 | 99.99th=[ 266] 00:24:17.614 bw ( KiB/s): min=67584, max=283648, per=8.92%, avg=125650.60, stdev=56151.98, samples=20 00:24:17.614 iops : min= 264, max= 1108, avg=490.80, stdev=219.36, samples=20 00:24:17.614 lat (msec) : 10=0.14%, 20=0.24%, 50=7.36%, 100=29.61%, 250=60.76% 00:24:17.614 lat (msec) : 500=1.89% 00:24:17.614 cpu : usr=1.71%, sys=1.34%, ctx=1335, majf=0, minf=1 00:24:17.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:17.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:24:17.615 issued rwts: total=0,4972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.615 job7: (groupid=0, jobs=1): err= 0: pid=1050523: Mon Jul 22 12:19:24 2024 00:24:17.615 write: IOPS=457, BW=114MiB/s (120MB/s)(1160MiB/10142msec); 0 zone resets 00:24:17.615 slat (usec): min=16, max=148381, avg=1806.30, stdev=4669.02 00:24:17.615 clat (msec): min=4, max=445, avg=138.01, stdev=64.63 00:24:17.615 lat (msec): min=4, max=445, avg=139.82, stdev=65.38 00:24:17.615 clat percentiles (msec): 00:24:17.615 | 1.00th=[ 20], 5.00th=[ 59], 10.00th=[ 73], 20.00th=[ 78], 00:24:17.615 | 30.00th=[ 82], 40.00th=[ 121], 50.00th=[ 136], 60.00th=[ 153], 00:24:17.615 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 215], 95.00th=[ 264], 00:24:17.615 | 99.00th=[ 321], 99.50th=[ 388], 99.90th=[ 443], 99.95th=[ 447], 00:24:17.615 | 99.99th=[ 447] 00:24:17.615 bw ( KiB/s): min=59392, max=212992, per=8.32%, avg=117150.30, stdev=41358.05, samples=20 00:24:17.615 iops : min= 232, max= 832, avg=457.60, stdev=161.56, samples=20 00:24:17.615 lat (msec) : 10=0.09%, 20=0.97%, 50=3.19%, 100=30.99%, 250=58.71% 00:24:17.615 lat (msec) : 500=6.06% 00:24:17.615 cpu : usr=1.53%, sys=1.55%, ctx=1943, majf=0, minf=1 00:24:17.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:17.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.615 issued rwts: total=0,4640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.615 job8: (groupid=0, jobs=1): err= 0: pid=1050526: Mon Jul 22 12:19:24 2024 00:24:17.615 write: IOPS=446, BW=112MiB/s (117MB/s)(1127MiB/10083msec); 0 zone resets 00:24:17.615 slat (usec): min=14, max=61744, avg=1821.44, stdev=4712.60 00:24:17.615 clat (usec): min=1783, max=422134, avg=141255.06, stdev=83629.50 00:24:17.615 lat (usec): min=1812, max=422165, avg=143076.50, stdev=84872.49 00:24:17.615 clat percentiles (msec): 00:24:17.615 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 50], 20.00th=[ 78], 00:24:17.615 | 30.00th=[ 87], 40.00th=[ 117], 50.00th=[ 128], 60.00th=[ 148], 00:24:17.615 | 70.00th=[ 167], 80.00th=[ 186], 90.00th=[ 253], 95.00th=[ 317], 00:24:17.615 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 422], 99.95th=[ 422], 00:24:17.615 | 99.99th=[ 422] 00:24:17.615 bw ( KiB/s): min=40960, max=185856, per=8.08%, avg=113721.60, stdev=38903.54, samples=20 00:24:17.615 iops : min= 160, max= 726, avg=444.20, stdev=151.98, samples=20 00:24:17.615 lat (msec) : 2=0.02%, 4=0.27%, 10=0.64%, 20=1.98%, 50=7.77% 00:24:17.615 lat (msec) : 100=23.79%, 250=55.17%, 500=10.36% 00:24:17.615 cpu : usr=1.41%, sys=1.53%, ctx=2155, majf=0, minf=1 00:24:17.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:17.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.615 issued rwts: total=0,4506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.615 job9: (groupid=0, jobs=1): err= 0: pid=1050527: Mon Jul 22 12:19:24 2024 00:24:17.615 write: IOPS=591, BW=148MiB/s (155MB/s)(1494MiB/10096msec); 0 zone resets 00:24:17.615 slat (usec): min=23, max=39195, avg=1528.07, stdev=3530.60 00:24:17.615 clat (msec): min=4, max=273, avg=106.57, stdev=64.77 00:24:17.615 lat (msec): 
min=5, max=273, avg=108.09, stdev=65.68 00:24:17.615 clat percentiles (msec): 00:24:17.615 | 1.00th=[ 13], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 45], 00:24:17.615 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 104], 60.00th=[ 122], 00:24:17.615 | 70.00th=[ 153], 80.00th=[ 169], 90.00th=[ 194], 95.00th=[ 226], 00:24:17.615 | 99.00th=[ 257], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:24:17.615 | 99.99th=[ 275] 00:24:17.615 bw ( KiB/s): min=69632, max=364544, per=10.75%, avg=151345.50, stdev=89170.65, samples=20 00:24:17.615 iops : min= 272, max= 1424, avg=591.10, stdev=348.35, samples=20 00:24:17.615 lat (msec) : 10=0.35%, 20=2.19%, 50=32.22%, 100=13.89%, 250=49.67% 00:24:17.615 lat (msec) : 500=1.67% 00:24:17.615 cpu : usr=1.73%, sys=2.07%, ctx=2092, majf=0, minf=1 00:24:17.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:17.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.615 issued rwts: total=0,5975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.615 job10: (groupid=0, jobs=1): err= 0: pid=1050530: Mon Jul 22 12:19:24 2024 00:24:17.615 write: IOPS=515, BW=129MiB/s (135MB/s)(1298MiB/10079msec); 0 zone resets 00:24:17.615 slat (usec): min=17, max=77800, avg=1243.85, stdev=3946.63 00:24:17.615 clat (usec): min=1281, max=388347, avg=122940.95, stdev=80839.70 00:24:17.615 lat (usec): min=1329, max=394594, avg=124184.80, stdev=81889.17 00:24:17.615 clat percentiles (msec): 00:24:17.615 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 52], 00:24:17.615 | 30.00th=[ 71], 40.00th=[ 89], 50.00th=[ 114], 60.00th=[ 127], 00:24:17.615 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 232], 95.00th=[ 288], 00:24:17.615 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 388], 00:24:17.615 | 99.99th=[ 388] 00:24:17.615 bw ( KiB/s): min=45056, max=233472, per=9.32%, avg=131258.00, stdev=55028.42, samples=20 00:24:17.615 iops : min= 176, max= 912, avg=512.70, stdev=214.91, samples=20 00:24:17.615 lat (msec) : 2=0.15%, 4=0.42%, 10=1.10%, 20=3.31%, 50=14.45% 00:24:17.615 lat (msec) : 100=22.86%, 250=50.62%, 500=7.09% 00:24:17.615 cpu : usr=1.57%, sys=1.65%, ctx=3193, majf=0, minf=1 00:24:17.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:17.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:17.615 issued rwts: total=0,5192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:17.615 00:24:17.615 Run status group 0 (all jobs): 00:24:17.615 WRITE: bw=1375MiB/s (1442MB/s), 104MiB/s-158MiB/s (109MB/s-165MB/s), io=13.6GiB (14.6GB), run=10069-10145msec 00:24:17.615 00:24:17.615 Disk stats (read/write): 00:24:17.615 nvme0n1: ios=49/12322, merge=0/0, ticks=77/1213276, in_queue=1213353, util=96.84% 00:24:17.615 nvme10n1: ios=24/10295, merge=0/0, ticks=128/1221036, in_queue=1221164, util=97.49% 00:24:17.615 nvme1n1: ios=46/8384, merge=0/0, ticks=5207/1200835, in_queue=1206042, util=99.99% 00:24:17.615 nvme2n1: ios=0/9336, merge=0/0, ticks=0/1242228, in_queue=1242228, util=97.56% 00:24:17.615 nvme3n1: ios=45/10055, merge=0/0, ticks=679/1226455, in_queue=1227134, util=100.00% 00:24:17.615 nvme4n1: ios=0/9749, merge=0/0, ticks=0/1230722, in_queue=1230722, util=98.05% 00:24:17.615 
nvme5n1: ios=0/9686, merge=0/0, ticks=0/1194861, in_queue=1194861, util=98.17% 00:24:17.615 nvme6n1: ios=45/9225, merge=0/0, ticks=164/1232150, in_queue=1232314, util=99.50% 00:24:17.615 nvme7n1: ios=38/8763, merge=0/0, ticks=575/1199565, in_queue=1200140, util=100.00% 00:24:17.615 nvme8n1: ios=0/11920, merge=0/0, ticks=0/1230517, in_queue=1230517, util=98.94% 00:24:17.615 nvme9n1: ios=0/10056, merge=0/0, ticks=0/1214977, in_queue=1214977, util=99.12% 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:17.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.615 12:19:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:17.615 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.615 12:19:25 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.615 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:17.872 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.872 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:18.129 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:18.129 12:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:18.129 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.129 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.129 12:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.129 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:18.385 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:18.385 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:18.385 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.385 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.386 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:18.644 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.644 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:18.902 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:18.902 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.902 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:19.162 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.162 12:19:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:19.162 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.162 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:19.422 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f 
./local-job0-0-verify.state 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.422 rmmod nvme_tcp 00:24:19.422 rmmod nvme_fabrics 00:24:19.422 rmmod nvme_keyring 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1045059 ']' 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1045059 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1045059 ']' 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1045059 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1045059 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1045059' 00:24:19.422 killing process with pid 1045059 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1045059 00:24:19.422 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1045059 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.990 12:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.897 12:19:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.897 00:24:21.897 real 1m0.858s 00:24:21.897 user 
3m24.215s 00:24:21.897 sys 0m24.853s 00:24:21.897 12:19:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:21.897 12:19:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.897 ************************************ 00:24:21.897 END TEST nvmf_multiconnection 00:24:21.897 ************************************ 00:24:22.155 12:19:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:22.155 12:19:29 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:22.155 12:19:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:22.155 12:19:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:22.155 12:19:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:22.155 ************************************ 00:24:22.155 START TEST nvmf_initiator_timeout 00:24:22.155 ************************************ 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:22.155 * Looking for test storage... 00:24:22.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.155 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.156 12:19:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.058 12:19:31 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:24.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:24.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.058 
12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.058 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:24.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:24.059 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:24:24.059 00:24:24.059 --- 10.0.0.2 ping statistics --- 00:24:24.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.059 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:24:24.059 00:24:24.059 --- 10.0.0.1 ping statistics --- 00:24:24.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.059 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1053855 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1053855 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1053855 ']' 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.059 12:19:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.059 [2024-07-22 12:19:31.939210] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:24:24.059 [2024-07-22 12:19:31.939285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.059 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.059 [2024-07-22 12:19:31.975800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
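
The nvmf_tcp_init block above builds a two-endpoint NVMe/TCP topology out of a single dual-port NIC by moving one port into a private network namespace. A minimal sketch of that plumbing, reconstructed from the commands visible in this trace (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this rig):

  ip netns add cvl_0_0_ns_spdk                   # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target-facing port in
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (port 4420)
  ping -c 1 10.0.0.2                             # verify initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and target -> initiator

With both pings answering, nvmf_tgt is started inside the namespace (the trace prefixes it with ip netns exec cvl_0_0_ns_spdk), so the kernel initiator in the default namespace exercises a real TCP path end to end.
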
00:24:24.315 [2024-07-22 12:19:32.003753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.315 [2024-07-22 12:19:32.089459] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.316 [2024-07-22 12:19:32.089508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.316 [2024-07-22 12:19:32.089537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.316 [2024-07-22 12:19:32.089549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.316 [2024-07-22 12:19:32.089558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.316 [2024-07-22 12:19:32.089703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.316 [2024-07-22 12:19:32.089733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.316 [2024-07-22 12:19:32.089785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.316 [2024-07-22 12:19:32.089787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.316 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.574 Malloc0 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.574 Delay0 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.574 [2024-07-22 12:19:32.281933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.574 [2024-07-22 12:19:32.310200] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.574 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:25.142 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:25.142 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:25.142 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.142 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:25.142 12:19:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:27.067 12:19:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:27.325 12:19:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:27.325 12:19:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:27.325 12:19:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:27.325 12:19:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.325 12:19:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:27.325 12:19:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1054281 00:24:27.325 12:19:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:27.325 12:19:35 nvmf_tcp.nvmf_initiator_timeout 
-- target/initiator_timeout.sh@37 -- # sleep 3 00:24:27.325 [global] 00:24:27.325 thread=1 00:24:27.325 invalidate=1 00:24:27.325 rw=write 00:24:27.325 time_based=1 00:24:27.325 runtime=60 00:24:27.325 ioengine=libaio 00:24:27.325 direct=1 00:24:27.325 bs=4096 00:24:27.325 iodepth=1 00:24:27.325 norandommap=0 00:24:27.325 numjobs=1 00:24:27.325 00:24:27.325 verify_dump=1 00:24:27.325 verify_backlog=512 00:24:27.325 verify_state_save=0 00:24:27.325 do_verify=1 00:24:27.325 verify=crc32c-intel 00:24:27.325 [job0] 00:24:27.325 filename=/dev/nvme0n1 00:24:27.325 Could not set queue depth (nvme0n1) 00:24:27.325 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:27.325 fio-3.35 00:24:27.325 Starting 1 thread 00:24:30.608 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:30.608 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.608 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.608 true 00:24:30.608 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.609 true 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.609 true 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.609 true 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.609 12:19:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.140 true 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.140 12:19:41 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.140 true 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:33.140 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.141 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.141 true 00:24:33.141 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.141 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:33.141 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.141 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.398 true 00:24:33.398 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.398 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:33.398 12:19:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1054281 00:25:29.626 00:25:29.626 job0: (groupid=0, jobs=1): err= 0: pid=1054351: Mon Jul 22 12:20:35 2024 00:25:29.626 read: IOPS=83, BW=332KiB/s (340kB/s)(19.5MiB/60023msec) 00:25:29.626 slat (usec): min=4, max=7828, avg=15.05, stdev=152.97 00:25:29.626 clat (usec): min=272, max=44959, avg=3486.08, stdev=10891.09 00:25:29.626 lat (usec): min=278, max=44978, avg=3501.13, stdev=10894.40 00:25:29.626 clat percentiles (usec): 00:25:29.626 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 334], 00:25:29.626 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 371], 00:25:29.626 | 70.00th=[ 388], 80.00th=[ 416], 90.00th=[ 494], 95.00th=[41157], 00:25:29.626 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:29.626 | 99.99th=[44827] 00:25:29.626 write: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60023msec); 0 zone resets 00:25:29.626 slat (nsec): min=5768, max=54438, avg=11858.73, stdev=6534.21 00:25:29.626 clat (usec): min=191, max=41239k, avg=8293.32, stdev=576330.40 00:25:29.626 lat (usec): min=199, max=41239k, avg=8305.18, stdev=576330.35 00:25:29.626 clat percentiles (usec): 00:25:29.626 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 00:25:29.626 | 20.00th=[ 215], 30.00th=[ 221], 40.00th=[ 223], 00:25:29.626 | 50.00th=[ 229], 60.00th=[ 235], 70.00th=[ 245], 00:25:29.626 | 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 302], 00:25:29.626 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 420], 00:25:29.626 | 99.95th=[ 469], 99.99th=[17112761] 00:25:29.626 bw ( KiB/s): min= 1600, max= 8192, per=100.00%, avg=5120.00, stdev=2467.03, samples=8 00:25:29.626 iops : min= 400, max= 2048, avg=1280.00, stdev=616.76, samples=8 00:25:29.626 lat (usec) : 250=37.25%, 500=57.94%, 750=1.03%, 1000=0.01% 00:25:29.626 lat (msec) : 2=0.01%, 4=0.01%, 50=3.74%, >=2000=0.01% 00:25:29.626 cpu : usr=0.11%, sys=0.27%, ctx=10111, majf=0, minf=2 00:25:29.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.626 issued rwts: total=4988,5120,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:29.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:29.626 00:25:29.626 Run status group 0 (all jobs): 00:25:29.626 READ: bw=332KiB/s (340kB/s), 332KiB/s-332KiB/s (340kB/s-340kB/s), io=19.5MiB (20.4MB), run=60023-60023msec 00:25:29.626 WRITE: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (21.0MB), run=60023-60023msec 00:25:29.626 00:25:29.626 Disk stats (read/write): 00:25:29.626 nvme0n1: ios=5084/5120, merge=0/0, ticks=17376/1150, in_queue=18526, util=99.83% 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:29.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:29.626 nvmf hotplug test: fio successful as expected 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.626 rmmod nvme_tcp 00:25:29.626 rmmod nvme_fabrics 00:25:29.626 rmmod nvme_keyring 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 
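
The test that just completed follows a simple pattern: expose a delay bdev over NVMe/TCP, then raise its injected latency above the initiator's command timeout while fio is running, and lower it again so the workload can drain and finish. A standalone reconstruction of the RPC sequence seen in this trace; the rpc.py path is an assumption (the trace calls it through the rpc_cmd wrapper), the RPC names and arguments are as logged, latency values are in the delay bdev's native units (microseconds, to my knowledge), and in this particular run p99_write was actually pushed to 310000000 rather than 31000000:

  RPC=./scripts/rpc.py                                 # assumed location of the SPDK RPC client
  $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB backing device, 512 B blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # with fio running against the connected namespace, stall I/O past the initiator timeout...
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 $lat 31000000
  done
  # ...then drop it back so queued commands complete and fio can exit with err=0
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 $lat 30
  done

The fio summary above is consistent with that choreography: a long stall (the 41-42 s band in the read clat percentiles) followed by a clean finish, which is what the "nvmf hotplug test: fio successful as expected" line asserts.
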
00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1053855 ']' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1053855 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1053855 ']' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1053855 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1053855 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1053855' 00:25:29.626 killing process with pid 1053855 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1053855 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1053855 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.626 12:20:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.194 12:20:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.194 00:25:30.194 real 1m8.021s 00:25:30.194 user 4m11.107s 00:25:30.194 sys 0m6.279s 00:25:30.194 12:20:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.194 12:20:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:30.194 ************************************ 00:25:30.194 END TEST nvmf_initiator_timeout 00:25:30.194 ************************************ 00:25:30.194 12:20:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:30.194 12:20:37 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:25:30.194 12:20:37 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:25:30.194 12:20:37 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:25:30.194 12:20:37 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:25:30.194 12:20:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.098 12:20:39 
nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.098 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.098 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
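
gather_supported_nvmf_pci_devs, replayed here on entry to the perf_adq test, matches PCI functions against a table of Intel E810/X722 and Mellanox device IDs and then resolves each match to its kernel net device through sysfs. The core lookup is just a glob over the device's net/ directory, sketched below with the bus addresses from this log:

  for pci in 0000:0a:00.0 0000:0a:00.1; do       # E810 functions found in this run
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$path" ] || continue             # skip if the glob matched nothing
          echo "Found net devices under $pci: ${path##*/}"
      done
  done
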
00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.098 12:20:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:25:32.099 12:20:39 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:32.099 12:20:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:32.099 12:20:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.099 12:20:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.099 ************************************ 00:25:32.099 START TEST nvmf_perf_adq 00:25:32.099 ************************************ 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:32.099 * Looking for test storage... 
00:25:32.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.099 12:20:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:34.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:34.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 
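The [[ ice == unknown ]] and [[ ice == unbound ]] checks in the traces above and below compare against the name of the driver currently bound to each matched function, which is how the harness knows both E810 ports are usable ("ice") rather than unclaimed or handed to a userspace driver. One plausible way to derive that name, assuming the usual sysfs layout (the harness's own lookup may differ):

# Sketch: resolve the bound driver for a PCI function (address from the trace).
pci=0000:0a:00.0
if [[ -e "/sys/bus/pci/devices/$pci/driver" ]]; then
    drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
else
    drv=unbound          # no kernel driver currently claims the function
fi
echo "$pci bound to: $drv"   # prints "ice" for the E810 ports in this run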
00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.028 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:34.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:34.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:34.029 12:20:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:34.595 12:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:37.126 12:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:42.396 12:20:49 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:42.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:42.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:42.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:42.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.396 12:20:49 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:42.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:25:42.396 00:25:42.396 --- 10.0.0.2 ping statistics --- 00:25:42.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.396 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:25:42.396 00:25:42.396 --- 10.0.0.1 ping statistics --- 00:25:42.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.396 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:42.396 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1065876 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1065876 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1065876 ']' 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 [2024-07-22 12:20:49.680934] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
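nvmf_tcp_init above turns the two E810 ports into a point-to-point rig: cvl_0_0 moves into a fresh network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and the two pings prove reachability in both directions before the target starts inside the namespace. A condensed restatement of just the topology commands from the trace (error handling omitted):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator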
00:25:42.397 [2024-07-22 12:20:49.681045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.397 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.397 [2024-07-22 12:20:49.719575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:42.397 [2024-07-22 12:20:49.752210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.397 [2024-07-22 12:20:49.843200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.397 [2024-07-22 12:20:49.843266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.397 [2024-07-22 12:20:49.843292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.397 [2024-07-22 12:20:49.843305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.397 [2024-07-22 12:20:49.843318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.397 [2024-07-22 12:20:49.843408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.397 [2024-07-22 12:20:49.843465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.397 [2024-07-22 12:20:49.843585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.397 [2024-07-22 12:20:49.843587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 
-- # rpc_cmd framework_start_init 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 [2024-07-22 12:20:50.050257] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 Malloc1 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 [2024-07-22 12:20:50.100827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1065905 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:42.397 12:20:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:25:42.397 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.296 12:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:25:44.296 12:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.296 12:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:44.296 
12:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.296 12:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:25:44.296 "tick_rate": 2700000000, 00:25:44.296 "poll_groups": [ 00:25:44.296 { 00:25:44.296 "name": "nvmf_tgt_poll_group_000", 00:25:44.296 "admin_qpairs": 1, 00:25:44.296 "io_qpairs": 1, 00:25:44.296 "current_admin_qpairs": 1, 00:25:44.296 "current_io_qpairs": 1, 00:25:44.296 "pending_bdev_io": 0, 00:25:44.296 "completed_nvme_io": 17625, 00:25:44.296 "transports": [ 00:25:44.296 { 00:25:44.296 "trtype": "TCP" 00:25:44.296 } 00:25:44.296 ] 00:25:44.296 }, 00:25:44.296 { 00:25:44.297 "name": "nvmf_tgt_poll_group_001", 00:25:44.297 "admin_qpairs": 0, 00:25:44.297 "io_qpairs": 1, 00:25:44.297 "current_admin_qpairs": 0, 00:25:44.297 "current_io_qpairs": 1, 00:25:44.297 "pending_bdev_io": 0, 00:25:44.297 "completed_nvme_io": 21094, 00:25:44.297 "transports": [ 00:25:44.297 { 00:25:44.297 "trtype": "TCP" 00:25:44.297 } 00:25:44.297 ] 00:25:44.297 }, 00:25:44.297 { 00:25:44.297 "name": "nvmf_tgt_poll_group_002", 00:25:44.297 "admin_qpairs": 0, 00:25:44.297 "io_qpairs": 1, 00:25:44.297 "current_admin_qpairs": 0, 00:25:44.297 "current_io_qpairs": 1, 00:25:44.297 "pending_bdev_io": 0, 00:25:44.297 "completed_nvme_io": 20818, 00:25:44.297 "transports": [ 00:25:44.297 { 00:25:44.297 "trtype": "TCP" 00:25:44.297 } 00:25:44.297 ] 00:25:44.297 }, 00:25:44.297 { 00:25:44.297 "name": "nvmf_tgt_poll_group_003", 00:25:44.297 "admin_qpairs": 0, 00:25:44.297 "io_qpairs": 1, 00:25:44.297 "current_admin_qpairs": 0, 00:25:44.297 "current_io_qpairs": 1, 00:25:44.297 "pending_bdev_io": 0, 00:25:44.297 "completed_nvme_io": 21138, 00:25:44.297 "transports": [ 00:25:44.297 { 00:25:44.297 "trtype": "TCP" 00:25:44.297 } 00:25:44.297 ] 00:25:44.297 } 00:25:44.297 ] 00:25:44.297 }' 00:25:44.297 12:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:44.297 12:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:25:44.297 12:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:25:44.297 12:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:25:44.297 12:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1065905 00:25:52.409 Initializing NVMe Controllers 00:25:52.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:52.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:52.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:52.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:52.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:52.409 Initialization complete. Launching workers. 
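The nvmf_get_stats check above is the pass/fail gate for qpair placement: with -m 0xF the target runs four poll groups, the perf client (cores 4-7) opens four I/O qpairs, and the test passes only if each poll group ends up with exactly one active I/O qpair instead of several stacking on one reactor. A sketch of an equivalent check (the scripts/rpc.py invocation is an assumption; the harness drives the same RPC through rpc_cmd):

# Sketch: assert one active I/O qpair per poll group after the perf run.
stats=$(scripts/rpc.py nvmf_get_stats)             # assumed rpc.py location
busy=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' <<<"$stats" | wc -l)
total=$(jq '.poll_groups | length' <<<"$stats")
if [[ "$busy" -ne "$total" ]]; then
    echo "qpairs unevenly placed: $busy of $total poll groups active" >&2
    exit 1
fi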
00:25:52.409 ======================================================== 00:25:52.409 Latency(us) 00:25:52.409 Device Information : IOPS MiB/s Average min max 00:25:52.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11027.15 43.07 5803.34 5013.11 7393.70 00:25:52.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11026.25 43.07 5804.63 2121.62 8047.94 00:25:52.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10935.65 42.72 5854.55 2823.57 8417.70 00:25:52.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9286.76 36.28 6894.09 2213.33 12804.52 00:25:52.409 ======================================================== 00:25:52.409 Total : 42275.82 165.14 6056.53 2121.62 12804.52 00:25:52.409 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:52.409 rmmod nvme_tcp 00:25:52.409 rmmod nvme_fabrics 00:25:52.409 rmmod nvme_keyring 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1065876 ']' 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1065876 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1065876 ']' 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1065876 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1065876 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1065876' 00:25:52.409 killing process with pid 1065876 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1065876 00:25:52.409 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1065876 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.667 12:21:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.208 12:21:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:55.208 12:21:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:25:55.208 12:21:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:55.470 12:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:57.366 12:21:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.662 12:21:10 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:02.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:02.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:02.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:02.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.662 
12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:02.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:26:02.662 00:26:02.662 --- 10.0.0.2 ping statistics --- 00:26:02.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.662 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:26:02.662 00:26:02.662 --- 10.0.0.1 ping statistics --- 00:26:02.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.662 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:02.662 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:02.662 net.core.busy_poll = 1 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:02.663 net.core.busy_read = 1 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1069128 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1069128 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1069128 ']' 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.663 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.663 [2024-07-22 12:21:10.576989] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:02.663 [2024-07-22 12:21:10.577078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.921 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.921 [2024-07-22 12:21:10.617312] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:02.921 [2024-07-22 12:21:10.646674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.921 [2024-07-22 12:21:10.737206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.921 [2024-07-22 12:21:10.737267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.921 [2024-07-22 12:21:10.737283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.921 [2024-07-22 12:21:10.737297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.921 [2024-07-22 12:21:10.737308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
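Everything adq_configure_driver does is visible in the trace above: it enables hardware TC offload and disables packet-inspect optimization on the target-side interface, turns on socket busy polling, splits the NIC queues into two traffic classes with an mqprio qdisc, and pins NVMe/TCP traffic (dst port 4420) to the second class with a hardware-only flower filter. Condensed from the trace into a standalone sequence — the interface name, address, and queue layout are the ones from this run, and the ip netns exec cvl_0_0_ns_spdk prefix is dropped here for readability:

    ETH=cvl_0_0                                      # target-side interface from this run
    ethtool --offload "$ETH" hw-tc-offload on        # let the NIC apply tc rules in hardware
    ethtool --set-priv-flags "$ETH" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                   # poll sockets instead of waiting on interrupts
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode
    tc qdisc add dev "$ETH" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$ETH" ingress
    # steer NVMe/TCP for 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw)
    tc filter add dev "$ETH" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target is then started inside the cvl_0_0_ns_spdk namespace (the nvmf_tgt launch traced above), so the filter's hardware queue split is what the later nvmf_get_stats poll-group counts are measuring.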
00:26:02.921 [2024-07-22 12:21:10.737393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.921 [2024-07-22 12:21:10.737445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.921 [2024-07-22 12:21:10.737560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.921 [2024-07-22 12:21:10.737562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.921 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.180 [2024-07-22 12:21:10.948189] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.180 Malloc1 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.180 12:21:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.180 12:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.180 [2024-07-22 12:21:10.998610] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.180 12:21:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.180 12:21:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1069281 00:26:03.180 12:21:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:03.180 12:21:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:03.180 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.085 12:21:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:05.085 12:21:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.085 12:21:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.342 12:21:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.342 12:21:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:05.342 "tick_rate": 2700000000, 00:26:05.342 "poll_groups": [ 00:26:05.342 { 00:26:05.342 "name": "nvmf_tgt_poll_group_000", 00:26:05.342 "admin_qpairs": 1, 00:26:05.342 "io_qpairs": 3, 00:26:05.342 "current_admin_qpairs": 1, 00:26:05.342 "current_io_qpairs": 3, 00:26:05.342 "pending_bdev_io": 0, 00:26:05.342 "completed_nvme_io": 26936, 00:26:05.342 "transports": [ 00:26:05.342 { 00:26:05.342 "trtype": "TCP" 00:26:05.342 } 00:26:05.342 ] 00:26:05.342 }, 00:26:05.342 { 00:26:05.342 "name": "nvmf_tgt_poll_group_001", 00:26:05.343 "admin_qpairs": 0, 00:26:05.343 "io_qpairs": 1, 00:26:05.343 "current_admin_qpairs": 0, 00:26:05.343 "current_io_qpairs": 1, 00:26:05.343 "pending_bdev_io": 0, 00:26:05.343 "completed_nvme_io": 25985, 00:26:05.343 "transports": [ 00:26:05.343 { 00:26:05.343 "trtype": "TCP" 00:26:05.343 } 00:26:05.343 ] 00:26:05.343 }, 00:26:05.343 { 00:26:05.343 "name": "nvmf_tgt_poll_group_002", 00:26:05.343 "admin_qpairs": 0, 00:26:05.343 "io_qpairs": 0, 00:26:05.343 "current_admin_qpairs": 0, 00:26:05.343 "current_io_qpairs": 0, 00:26:05.343 "pending_bdev_io": 0, 00:26:05.343 "completed_nvme_io": 0, 
00:26:05.343 "transports": [ 00:26:05.343 { 00:26:05.343 "trtype": "TCP" 00:26:05.343 } 00:26:05.343 ] 00:26:05.343 }, 00:26:05.343 { 00:26:05.343 "name": "nvmf_tgt_poll_group_003", 00:26:05.343 "admin_qpairs": 0, 00:26:05.343 "io_qpairs": 0, 00:26:05.343 "current_admin_qpairs": 0, 00:26:05.343 "current_io_qpairs": 0, 00:26:05.343 "pending_bdev_io": 0, 00:26:05.343 "completed_nvme_io": 0, 00:26:05.343 "transports": [ 00:26:05.343 { 00:26:05.343 "trtype": "TCP" 00:26:05.343 } 00:26:05.343 ] 00:26:05.343 } 00:26:05.343 ] 00:26:05.343 }' 00:26:05.343 12:21:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:05.343 12:21:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:05.343 12:21:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:05.343 12:21:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:05.343 12:21:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1069281 00:26:13.443 Initializing NVMe Controllers 00:26:13.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:13.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:13.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:13.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:13.443 Initialization complete. Launching workers. 00:26:13.443 ======================================================== 00:26:13.443 Latency(us) 00:26:13.443 Device Information : IOPS MiB/s Average min max 00:26:13.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13648.90 53.32 4688.82 1601.43 6925.56 00:26:13.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4658.70 18.20 13741.64 1644.16 60962.46 00:26:13.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4911.20 19.18 13056.35 2162.01 60272.59 00:26:13.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4501.60 17.58 14220.51 1874.22 61429.81 00:26:13.443 ======================================================== 00:26:13.443 Total : 27720.40 108.28 9240.59 1601.43 61429.81 00:26:13.443 00:26:13.443 12:21:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:13.443 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:13.443 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:13.443 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:13.443 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:13.443 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:13.444 rmmod nvme_tcp 00:26:13.444 rmmod nvme_fabrics 00:26:13.444 rmmod nvme_keyring 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1069128 ']' 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1069128 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1069128 ']' 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1069128 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1069128 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1069128' 00:26:13.444 killing process with pid 1069128 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1069128 00:26:13.444 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1069128 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.701 12:21:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.230 12:21:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:16.230 12:21:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:16.230 00:26:16.230 real 0m43.731s 00:26:16.230 user 2m34.773s 00:26:16.230 sys 0m11.363s 00:26:16.230 12:21:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:16.230 12:21:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.230 ************************************ 00:26:16.230 END TEST nvmf_perf_adq 00:26:16.230 ************************************ 00:26:16.230 12:21:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:16.230 12:21:23 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:16.230 12:21:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:16.230 12:21:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.230 12:21:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:16.230 ************************************ 00:26:16.230 START TEST nvmf_shutdown 00:26:16.230 ************************************ 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:16.230 * Looking for test storage... 
00:26:16.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:16.230 ************************************ 00:26:16.230 START TEST nvmf_shutdown_tc1 00:26:16.230 ************************************ 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:26:16.230 12:21:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:16.230 12:21:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.125 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:18.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:18.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.126 12:21:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:18.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:18.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:26:18.126 00:26:18.126 --- 10.0.0.2 ping statistics --- 00:26:18.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.126 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:26:18.126 00:26:18.126 --- 10.0.0.1 ping statistics --- 00:26:18.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.126 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1072432 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1072432 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1072432 ']' 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.126 12:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.126 [2024-07-22 12:21:25.865106] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:26:18.126 [2024-07-22 12:21:25.865178] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.126 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.126 [2024-07-22 12:21:25.903878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:18.126 [2024-07-22 12:21:25.935478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.126 [2024-07-22 12:21:26.028877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.126 [2024-07-22 12:21:26.028962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.126 [2024-07-22 12:21:26.028987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.126 [2024-07-22 12:21:26.029000] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.126 [2024-07-22 12:21:26.029012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.126 [2024-07-22 12:21:26.029095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.126 [2024-07-22 12:21:26.029209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.126 [2024-07-22 12:21:26.029278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.126 [2024-07-22 12:21:26.029276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.384 [2024-07-22 12:21:26.163246] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.384 12:21:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.384 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.384 Malloc1 00:26:18.384 [2024-07-22 12:21:26.238240] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.384 Malloc2 00:26:18.384 Malloc3 00:26:18.642 Malloc4 00:26:18.642 Malloc5 00:26:18.642 Malloc6 00:26:18.642 Malloc7 00:26:18.642 Malloc8 00:26:18.900 Malloc9 00:26:18.900 Malloc10 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.900 12:21:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1072550 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1072550 /var/tmp/bdevperf.sock 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1072550 ']' 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:18.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.900 { 00:26:18.900 "params": { 00:26:18.900 "name": "Nvme$subsystem", 00:26:18.900 "trtype": "$TEST_TRANSPORT", 00:26:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.900 "adrfam": "ipv4", 00:26:18.900 "trsvcid": "$NVMF_PORT", 00:26:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.900 "hdgst": ${hdgst:-false}, 00:26:18.900 "ddgst": ${ddgst:-false} 00:26:18.900 }, 00:26:18.900 "method": "bdev_nvme_attach_controller" 00:26:18.900 } 00:26:18.900 EOF 00:26:18.900 )") 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.900 { 00:26:18.900 "params": { 00:26:18.900 "name": "Nvme$subsystem", 00:26:18.900 "trtype": "$TEST_TRANSPORT", 00:26:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.900 "adrfam": "ipv4", 00:26:18.900 "trsvcid": "$NVMF_PORT", 00:26:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.900 "hdgst": ${hdgst:-false}, 00:26:18.900 "ddgst": ${ddgst:-false} 00:26:18.900 }, 00:26:18.900 "method": "bdev_nvme_attach_controller" 00:26:18.900 } 00:26:18.900 EOF 00:26:18.900 )") 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.900 { 00:26:18.900 "params": { 00:26:18.900 "name": "Nvme$subsystem", 00:26:18.900 "trtype": "$TEST_TRANSPORT", 00:26:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.900 "adrfam": "ipv4", 00:26:18.900 "trsvcid": "$NVMF_PORT", 00:26:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.900 "hdgst": ${hdgst:-false}, 00:26:18.900 "ddgst": ${ddgst:-false} 00:26:18.900 }, 00:26:18.900 "method": "bdev_nvme_attach_controller" 00:26:18.900 } 00:26:18.900 EOF 00:26:18.900 )") 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.900 { 00:26:18.900 "params": { 00:26:18.900 "name": "Nvme$subsystem", 00:26:18.900 "trtype": "$TEST_TRANSPORT", 00:26:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.900 "adrfam": "ipv4", 00:26:18.900 "trsvcid": "$NVMF_PORT", 00:26:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.900 "hdgst": ${hdgst:-false}, 00:26:18.900 "ddgst": ${ddgst:-false} 00:26:18.900 }, 00:26:18.900 "method": "bdev_nvme_attach_controller" 00:26:18.900 } 00:26:18.900 EOF 00:26:18.900 )") 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.900 { 00:26:18.900 "params": { 00:26:18.900 "name": "Nvme$subsystem", 00:26:18.900 "trtype": "$TEST_TRANSPORT", 00:26:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.900 "adrfam": "ipv4", 00:26:18.900 "trsvcid": "$NVMF_PORT", 00:26:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.900 "hdgst": ${hdgst:-false}, 00:26:18.900 "ddgst": ${ddgst:-false} 00:26:18.900 }, 00:26:18.900 "method": "bdev_nvme_attach_controller" 00:26:18.900 } 00:26:18.900 EOF 00:26:18.900 )") 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.900 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.900 { 00:26:18.900 "params": { 00:26:18.900 "name": "Nvme$subsystem", 00:26:18.900 "trtype": "$TEST_TRANSPORT", 00:26:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.900 "adrfam": "ipv4", 00:26:18.900 "trsvcid": "$NVMF_PORT", 00:26:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.900 "hdgst": ${hdgst:-false}, 00:26:18.900 "ddgst": ${ddgst:-false} 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 } 00:26:18.901 EOF 00:26:18.901 )") 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.901 { 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme$subsystem", 00:26:18.901 "trtype": "$TEST_TRANSPORT", 00:26:18.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "$NVMF_PORT", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.901 "hdgst": ${hdgst:-false}, 00:26:18.901 "ddgst": ${ddgst:-false} 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 } 00:26:18.901 EOF 00:26:18.901 )") 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.901 { 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme$subsystem", 00:26:18.901 "trtype": "$TEST_TRANSPORT", 00:26:18.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "$NVMF_PORT", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.901 "hdgst": ${hdgst:-false}, 00:26:18.901 "ddgst": ${ddgst:-false} 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 } 00:26:18.901 EOF 00:26:18.901 )") 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.901 { 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme$subsystem", 00:26:18.901 "trtype": "$TEST_TRANSPORT", 00:26:18.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "$NVMF_PORT", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.901 "hdgst": ${hdgst:-false}, 00:26:18.901 "ddgst": ${ddgst:-false} 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 } 00:26:18.901 EOF 00:26:18.901 )") 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.901 { 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme$subsystem", 00:26:18.901 "trtype": "$TEST_TRANSPORT", 00:26:18.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "$NVMF_PORT", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.901 "hdgst": ${hdgst:-false}, 00:26:18.901 "ddgst": ${ddgst:-false} 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 } 00:26:18.901 EOF 00:26:18.901 )") 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:18.901 12:21:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme1", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme2", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme3", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme4", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme5", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme6", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme7", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme8", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:18.901 "hdgst": false, 
00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme9", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 },{ 00:26:18.901 "params": { 00:26:18.901 "name": "Nvme10", 00:26:18.901 "trtype": "tcp", 00:26:18.901 "traddr": "10.0.0.2", 00:26:18.901 "adrfam": "ipv4", 00:26:18.901 "trsvcid": "4420", 00:26:18.901 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:18.901 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:18.901 "hdgst": false, 00:26:18.901 "ddgst": false 00:26:18.901 }, 00:26:18.901 "method": "bdev_nvme_attach_controller" 00:26:18.901 }' 00:26:18.901 [2024-07-22 12:21:26.733917] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:18.901 [2024-07-22 12:21:26.734018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:18.901 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.901 [2024-07-22 12:21:26.770180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:18.901 [2024-07-22 12:21:26.799537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.160 [2024-07-22 12:21:26.887194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1072550 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:21.054 12:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:22.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1072550 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:22.018 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1072432 00:26:22.018 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:22.018 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:22.018 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:22.018 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:22.018 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": 
${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 
}, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.019 { 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme$subsystem", 00:26:22.019 "trtype": "$TEST_TRANSPORT", 00:26:22.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "$NVMF_PORT", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.019 "hdgst": ${hdgst:-false}, 00:26:22.019 "ddgst": ${ddgst:-false} 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 } 00:26:22.019 EOF 00:26:22.019 )") 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
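Each of the near-identical blocks above is one loop iteration through the fragment builder in nvmf/common.sh, run once per subsystem argument: @534 steps the loop, the heredoc at @554 appends one JSON stanza to the config array, and the cat emits it into the command substitution. A minimal sketch of that pattern, reconstructed from the trace; the fallback values for the transport variables and the outer "subsystems" envelope are assumptions added here so the sketch runs standalone, since neither is visible in this log:

gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        # One stanza per subsystem index, mirroring the traced heredoc;
        # hdgst/ddgst collapse to false when the caller leaves them unset.
        # The :-tcp / :-10.0.0.2 / :-4420 defaults are assumed for a
        # standalone run; the harness exports these variables itself.
        config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Join the stanzas with commas (the IFS=, / printf pair at @557-@558)
    # and pretty-print through jq as at @556; the wrapper object is an
    # assumed minimal envelope, not shown in the trace.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}

With the arguments 1 through 10 passed at shutdown.sh@91 above, the join resolves to the ten Nvme1..Nvme10 attach stanzas printed next.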
00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:22.019 12:21:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme1", 00:26:22.019 "trtype": "tcp", 00:26:22.019 "traddr": "10.0.0.2", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "4420", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.019 "hdgst": false, 00:26:22.019 "ddgst": false 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 },{ 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme2", 00:26:22.019 "trtype": "tcp", 00:26:22.019 "traddr": "10.0.0.2", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "4420", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:22.019 "hdgst": false, 00:26:22.019 "ddgst": false 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 },{ 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme3", 00:26:22.019 "trtype": "tcp", 00:26:22.019 "traddr": "10.0.0.2", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "4420", 00:26:22.019 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:22.019 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:22.019 "hdgst": false, 00:26:22.019 "ddgst": false 00:26:22.019 }, 00:26:22.019 "method": "bdev_nvme_attach_controller" 00:26:22.019 },{ 00:26:22.019 "params": { 00:26:22.019 "name": "Nvme4", 00:26:22.019 "trtype": "tcp", 00:26:22.019 "traddr": "10.0.0.2", 00:26:22.019 "adrfam": "ipv4", 00:26:22.019 "trsvcid": "4420", 00:26:22.020 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:22.020 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:22.020 "hdgst": false, 00:26:22.020 "ddgst": false 00:26:22.020 }, 00:26:22.020 "method": "bdev_nvme_attach_controller" 00:26:22.020 },{ 00:26:22.020 "params": { 00:26:22.020 "name": "Nvme5", 00:26:22.020 "trtype": "tcp", 00:26:22.020 "traddr": "10.0.0.2", 00:26:22.020 "adrfam": "ipv4", 00:26:22.020 "trsvcid": "4420", 00:26:22.020 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:22.020 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:22.020 "hdgst": false, 00:26:22.020 "ddgst": false 00:26:22.020 }, 00:26:22.020 "method": "bdev_nvme_attach_controller" 00:26:22.020 },{ 00:26:22.020 "params": { 00:26:22.020 "name": "Nvme6", 00:26:22.020 "trtype": "tcp", 00:26:22.020 "traddr": "10.0.0.2", 00:26:22.020 "adrfam": "ipv4", 00:26:22.020 "trsvcid": "4420", 00:26:22.020 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:22.020 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:22.020 "hdgst": false, 00:26:22.020 "ddgst": false 00:26:22.020 }, 00:26:22.020 "method": "bdev_nvme_attach_controller" 00:26:22.020 },{ 00:26:22.020 "params": { 00:26:22.020 "name": "Nvme7", 00:26:22.020 "trtype": "tcp", 00:26:22.020 "traddr": "10.0.0.2", 00:26:22.020 "adrfam": "ipv4", 00:26:22.020 "trsvcid": "4420", 00:26:22.020 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:22.020 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:22.020 "hdgst": false, 00:26:22.020 "ddgst": false 00:26:22.020 }, 00:26:22.020 "method": "bdev_nvme_attach_controller" 00:26:22.020 },{ 00:26:22.020 "params": { 00:26:22.020 "name": "Nvme8", 00:26:22.020 "trtype": "tcp", 00:26:22.020 "traddr": "10.0.0.2", 00:26:22.020 "adrfam": "ipv4", 00:26:22.020 "trsvcid": "4420", 00:26:22.020 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:22.020 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:22.020 "hdgst": false, 
00:26:22.020 "ddgst": false 00:26:22.020 }, 00:26:22.020 "method": "bdev_nvme_attach_controller" 00:26:22.020 },{ 00:26:22.020 "params": { 00:26:22.020 "name": "Nvme9", 00:26:22.020 "trtype": "tcp", 00:26:22.020 "traddr": "10.0.0.2", 00:26:22.020 "adrfam": "ipv4", 00:26:22.020 "trsvcid": "4420", 00:26:22.020 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:22.020 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:22.020 "hdgst": false, 00:26:22.020 "ddgst": false 00:26:22.020 }, 00:26:22.020 "method": "bdev_nvme_attach_controller" 00:26:22.020 },{ 00:26:22.020 "params": { 00:26:22.020 "name": "Nvme10", 00:26:22.020 "trtype": "tcp", 00:26:22.020 "traddr": "10.0.0.2", 00:26:22.020 "adrfam": "ipv4", 00:26:22.020 "trsvcid": "4420", 00:26:22.020 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:22.020 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:22.020 "hdgst": false, 00:26:22.020 "ddgst": false 00:26:22.020 }, 00:26:22.020 "method": "bdev_nvme_attach_controller" 00:26:22.020 }' 00:26:22.020 [2024-07-22 12:21:29.731496] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:22.020 [2024-07-22 12:21:29.731571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072911 ] 00:26:22.020 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.020 [2024-07-22 12:21:29.767625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:22.020 [2024-07-22 12:21:29.796230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.020 [2024-07-22 12:21:29.882880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.915 Running I/O for 1 seconds... 
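The relaunch works because tc1's assertion already held: bdev_svc (pid 1072550 in this run) was killed with kill -9 at shutdown.sh@83, and kill -0 1072432 at @88 confirmed the target process survived. bdevperf then consumes the freshly generated JSON through process substitution, which is why it reads --json /dev/fd/62 rather than a file on disk. A sketch of that invocation with the queue/size/workload options copied from this run; $rootdir stands for the SPDK checkout root, as it does in the harness:

num_subsystems=({1..10})

# bdevperf consumes the generated JSON via the /dev/fd path that <( ) creates.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1

The one-second verify run that follows produces the per-controller latency table below.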
00:26:24.846
00:26:24.846 Latency(us)
00:26:24.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:24.846 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme1n1 : 1.12 228.96 14.31 0.00 0.00 276767.10 22136.60 259425.47
00:26:24.846 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme2n1 : 1.14 223.90 13.99 0.00 0.00 277414.87 40972.14 239230.67
00:26:24.846 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme3n1 : 1.10 249.43 15.59 0.00 0.00 238493.84 11068.30 253211.69
00:26:24.846 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme4n1 : 1.07 239.73 14.98 0.00 0.00 250395.50 15922.82 254765.13
00:26:24.846 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme5n1 : 1.11 230.81 14.43 0.00 0.00 256203.28 20874.43 264085.81
00:26:24.846 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme6n1 : 1.16 220.80 13.80 0.00 0.00 264339.34 20777.34 274959.93
00:26:24.846 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme7n1 : 1.12 232.69 14.54 0.00 0.00 244066.29 2512.21 217482.43
00:26:24.846 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme8n1 : 1.17 272.52 17.03 0.00 0.00 207335.35 12039.21 260978.92
00:26:24.846 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme9n1 : 1.15 222.20 13.89 0.00 0.00 249100.52 17767.54 259425.47
00:26:24.846 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.846 Verification LBA range: start 0x0 length 0x400
00:26:24.846 Nvme10n1 : 1.17 219.72 13.73 0.00 0.00 248133.40 20971.52 302921.96
===================================================================================================================
00:26:24.846 Total : 2340.75 146.30 0.00 0.00 250059.31 2512.21 302921.96
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:25.102 rmmod nvme_tcp 00:26:25.102 rmmod nvme_fabrics 00:26:25.102 rmmod nvme_keyring 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1072432 ']' 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1072432 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1072432 ']' 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1072432 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1072432 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1072432' 00:26:25.102 killing process with pid 1072432 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1072432 00:26:25.102 12:21:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1072432 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.664 12:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.191 00:26:28.191 real 0m11.796s 00:26:28.191 user 0m34.216s 00:26:28.191 sys 0m3.224s 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:28.191 12:21:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:28.191 ************************************ 00:26:28.191 END TEST nvmf_shutdown_tc1 00:26:28.191 ************************************ 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:28.191 ************************************ 00:26:28.191 START TEST nvmf_shutdown_tc2 00:26:28.191 ************************************ 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:28.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.191 12:21:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:28.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:28.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:28.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:28.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:26:28.191
00:26:28.191 --- 10.0.0.2 ping statistics ---
00:26:28.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.191 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:28.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:28.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:26:28.191
00:26:28.191 --- 10.0.0.1 ping statistics ---
00:26:28.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.191 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:28.191 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1073682
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1073682
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1073682 ']'
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
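Before starting this target, nvmf_tcp_init (traced above at nvmf/common.sh@229-@268) split the two discovered ports across network namespaces: the target side (cvl_0_0) moves into its own namespace, the initiator side (cvl_0_1) stays in the root one, both get addresses on 10.0.0.0/24, and reachability is pinged in both directions. A condensed replay of those traced commands; the interface names are the ones discovered on this machine:

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open the NVMe/TCP listener port and check both directions, as at @264-@268.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

This is also why the nvmf_tgt launch at @480 above is wrapped in ip netns exec cvl_0_0_ns_spdk: every target-side command for the rest of the test runs inside that namespace.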
00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.192 12:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.192 [2024-07-22 12:21:35.775767] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:28.192 [2024-07-22 12:21:35.775853] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.192 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.192 [2024-07-22 12:21:35.819318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:28.192 [2024-07-22 12:21:35.848305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.192 [2024-07-22 12:21:35.939726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.192 [2024-07-22 12:21:35.939780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.192 [2024-07-22 12:21:35.939793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.192 [2024-07-22 12:21:35.939805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.192 [2024-07-22 12:21:35.939815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.192 [2024-07-22 12:21:35.939907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.192 [2024-07-22 12:21:35.940021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.192 [2024-07-22 12:21:35.940091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:28.192 [2024-07-22 12:21:35.940094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.192 [2024-07-22 12:21:36.082302] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:28.192 12:21:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.192 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.449 Malloc1 00:26:28.449 [2024-07-22 12:21:36.157725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.449 Malloc2 00:26:28.449 Malloc3 00:26:28.449 Malloc4 00:26:28.449 Malloc5 00:26:28.449 Malloc6 00:26:28.706 Malloc7 00:26:28.706 Malloc8 00:26:28.706 Malloc9 00:26:28.706 Malloc10 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.706 12:21:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1073853 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1073853 /var/tmp/bdevperf.sock 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1073853 ']' 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.706 { 00:26:28.706 "params": { 00:26:28.706 "name": "Nvme$subsystem", 00:26:28.706 "trtype": "$TEST_TRANSPORT", 00:26:28.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.706 "adrfam": "ipv4", 00:26:28.706 "trsvcid": "$NVMF_PORT", 00:26:28.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.706 "hdgst": ${hdgst:-false}, 00:26:28.706 "ddgst": ${ddgst:-false} 00:26:28.706 }, 00:26:28.706 "method": "bdev_nvme_attach_controller" 00:26:28.706 } 00:26:28.706 EOF 00:26:28.706 )") 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.706 { 00:26:28.706 "params": { 00:26:28.706 "name": "Nvme$subsystem", 00:26:28.706 "trtype": "$TEST_TRANSPORT", 00:26:28.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.706 "adrfam": "ipv4", 00:26:28.706 "trsvcid": "$NVMF_PORT", 00:26:28.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:26:28.706 "hdgst": ${hdgst:-false}, 00:26:28.706 "ddgst": ${ddgst:-false} 00:26:28.706 }, 00:26:28.706 "method": "bdev_nvme_attach_controller" 00:26:28.706 } 00:26:28.706 EOF 00:26:28.706 )") 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.706 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.706 { 00:26:28.706 "params": { 00:26:28.706 "name": "Nvme$subsystem", 00:26:28.706 "trtype": "$TEST_TRANSPORT", 00:26:28.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.706 "adrfam": "ipv4", 00:26:28.706 "trsvcid": "$NVMF_PORT", 00:26:28.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.706 "hdgst": ${hdgst:-false}, 00:26:28.706 "ddgst": ${ddgst:-false} 00:26:28.706 }, 00:26:28.706 "method": "bdev_nvme_attach_controller" 00:26:28.706 } 00:26:28.706 EOF 00:26:28.706 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.964 { 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme$subsystem", 00:26:28.964 "trtype": "$TEST_TRANSPORT", 00:26:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "$NVMF_PORT", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.964 "hdgst": ${hdgst:-false}, 00:26:28.964 "ddgst": ${ddgst:-false} 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 } 00:26:28.964 EOF 00:26:28.964 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.964 { 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme$subsystem", 00:26:28.964 "trtype": "$TEST_TRANSPORT", 00:26:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "$NVMF_PORT", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.964 "hdgst": ${hdgst:-false}, 00:26:28.964 "ddgst": ${ddgst:-false} 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 } 00:26:28.964 EOF 00:26:28.964 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.964 { 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme$subsystem", 00:26:28.964 "trtype": "$TEST_TRANSPORT", 00:26:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "$NVMF_PORT", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.964 "hdgst": 
${hdgst:-false}, 00:26:28.964 "ddgst": ${ddgst:-false} 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 } 00:26:28.964 EOF 00:26:28.964 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.964 { 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme$subsystem", 00:26:28.964 "trtype": "$TEST_TRANSPORT", 00:26:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "$NVMF_PORT", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.964 "hdgst": ${hdgst:-false}, 00:26:28.964 "ddgst": ${ddgst:-false} 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 } 00:26:28.964 EOF 00:26:28.964 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.964 { 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme$subsystem", 00:26:28.964 "trtype": "$TEST_TRANSPORT", 00:26:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "$NVMF_PORT", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.964 "hdgst": ${hdgst:-false}, 00:26:28.964 "ddgst": ${ddgst:-false} 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 } 00:26:28.964 EOF 00:26:28.964 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.964 { 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme$subsystem", 00:26:28.964 "trtype": "$TEST_TRANSPORT", 00:26:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "$NVMF_PORT", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.964 "hdgst": ${hdgst:-false}, 00:26:28.964 "ddgst": ${ddgst:-false} 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 } 00:26:28.964 EOF 00:26:28.964 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.964 { 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme$subsystem", 00:26:28.964 "trtype": "$TEST_TRANSPORT", 00:26:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "$NVMF_PORT", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.964 "hdgst": ${hdgst:-false}, 00:26:28.964 
"ddgst": ${ddgst:-false} 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 } 00:26:28.964 EOF 00:26:28.964 )") 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:28.964 12:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme1", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme2", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme3", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme4", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme5", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme6", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme7", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 
},{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme8", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme9", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 },{ 00:26:28.964 "params": { 00:26:28.964 "name": "Nvme10", 00:26:28.964 "trtype": "tcp", 00:26:28.964 "traddr": "10.0.0.2", 00:26:28.964 "adrfam": "ipv4", 00:26:28.964 "trsvcid": "4420", 00:26:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:28.964 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:28.964 "hdgst": false, 00:26:28.964 "ddgst": false 00:26:28.964 }, 00:26:28.964 "method": "bdev_nvme_attach_controller" 00:26:28.964 }' 00:26:28.964 [2024-07-22 12:21:36.672467] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:28.964 [2024-07-22 12:21:36.672555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073853 ] 00:26:28.964 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.965 [2024-07-22 12:21:36.707182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:28.965 [2024-07-22 12:21:36.736185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.965 [2024-07-22 12:21:36.822802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.859 Running I/O for 10 seconds... 
00:26:30.859 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.859 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:30.859 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:30.859 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.859 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:31.116 12:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:31.373 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@67 -- # sleep 0.25 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1073853 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1073853 ']' 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1073853 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1073853 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1073853' 00:26:31.630 killing process with pid 1073853 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1073853 00:26:31.630 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1073853 00:26:31.887 Received shutdown signal, test time was about 0.980243 seconds 00:26:31.887 00:26:31.887 Latency(us) 00:26:31.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.887 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme1n1 : 0.92 207.73 12.98 0.00 0.00 304317.12 17961.72 253211.69 00:26:31.887 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme2n1 : 0.94 272.37 17.02 0.00 0.00 227498.10 20388.98 257872.02 00:26:31.887 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 
length 0x400 00:26:31.887 Nvme3n1 : 0.98 261.38 16.34 0.00 0.00 223858.16 29709.65 248551.35 00:26:31.887 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme4n1 : 0.94 271.10 16.94 0.00 0.00 219183.22 18447.17 259425.47 00:26:31.887 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme5n1 : 0.93 206.30 12.89 0.00 0.00 282064.47 23981.32 276513.37 00:26:31.887 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme6n1 : 0.92 209.14 13.07 0.00 0.00 271728.13 20874.43 259425.47 00:26:31.887 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme7n1 : 0.90 212.29 13.27 0.00 0.00 259370.35 37865.24 236123.78 00:26:31.887 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme8n1 : 0.90 213.45 13.34 0.00 0.00 253908.01 22427.88 250104.79 00:26:31.887 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.887 Verification LBA range: start 0x0 length 0x400 00:26:31.887 Nvme9n1 : 0.91 219.22 13.70 0.00 0.00 239675.71 3883.61 234570.33 00:26:31.888 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.888 Verification LBA range: start 0x0 length 0x400 00:26:31.888 Nvme10n1 : 0.93 205.58 12.85 0.00 0.00 253483.99 19903.53 285834.05 00:26:31.888 =================================================================================================================== 00:26:31.888 Total : 2278.57 142.41 0.00 0.00 250739.95 3883.61 285834.05 00:26:31.888 12:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1073682 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:33.258 rmmod nvme_tcp 00:26:33.258 rmmod nvme_fabrics 00:26:33.258 rmmod nvme_keyring 00:26:33.258 12:21:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1073682 ']' 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1073682 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1073682 ']' 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1073682 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1073682 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1073682' 00:26:33.258 killing process with pid 1073682 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1073682 00:26:33.258 12:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1073682 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.516 12:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.044 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:36.044 00:26:36.044 real 0m7.835s 00:26:36.044 user 0m24.187s 00:26:36.044 sys 0m1.473s 00:26:36.044 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:36.044 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.044 ************************************ 00:26:36.044 END TEST nvmf_shutdown_tc2 00:26:36.044 ************************************ 00:26:36.044 12:21:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:26:36.044 12:21:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:36.044 12:21:43 
nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:36.044 12:21:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.044 12:21:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:36.044 ************************************ 00:26:36.045 START TEST nvmf_shutdown_tc3 00:26:36.045 ************************************ 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.045 12:21:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:36.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:36.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:36.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:36.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
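The nvmf_tcp_init trace below reduces to a short sequence of ip/iptables commands: the target port is moved into its own network namespace, each side gets a 10.0.0.0/24 address, port 4420 is opened on the initiator interface, and connectivity is ping-verified in both directions. A sketch, using the cvl_0_0/cvl_0_1 device names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator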
00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:36.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:26:36.045 00:26:36.045 --- 10.0.0.2 ping statistics --- 00:26:36.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.045 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:36.045 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:26:36.045 00:26:36.045 --- 10.0.0.1 ping statistics --- 00:26:36.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.046 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1074762 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1074762 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1074762 ']' 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.046 [2024-07-22 12:21:43.672397] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:26:36.046 [2024-07-22 12:21:43.672490] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.046 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.046 [2024-07-22 12:21:43.712736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:36.046 [2024-07-22 12:21:43.738755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.046 [2024-07-22 12:21:43.825645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.046 [2024-07-22 12:21:43.825716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.046 [2024-07-22 12:21:43.825739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.046 [2024-07-22 12:21:43.825750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.046 [2024-07-22 12:21:43.825761] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.046 [2024-07-22 12:21:43.825824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.046 [2024-07-22 12:21:43.825882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.046 [2024-07-22 12:21:43.825948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:36.046 [2024-07-22 12:21:43.825950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:36.046 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.304 [2024-07-22 12:21:43.982414] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.304 12:21:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.304 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.304 Malloc1 00:26:36.304 [2024-07-22 12:21:44.066353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.304 Malloc2 00:26:36.304 Malloc3 00:26:36.304 Malloc4 00:26:36.562 Malloc5 00:26:36.562 Malloc6 00:26:36.562 Malloc7 00:26:36.562 Malloc8 00:26:36.562 Malloc9 00:26:36.562 Malloc10 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.820 12:21:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1074940 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1074940 /var/tmp/bdevperf.sock 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1074940 ']' 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.820 { 00:26:36.820 "params": { 00:26:36.820 "name": "Nvme$subsystem", 00:26:36.820 "trtype": "$TEST_TRANSPORT", 00:26:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.820 "adrfam": "ipv4", 00:26:36.820 "trsvcid": "$NVMF_PORT", 00:26:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.820 "hdgst": ${hdgst:-false}, 00:26:36.820 "ddgst": ${ddgst:-false} 00:26:36.820 }, 00:26:36.820 "method": "bdev_nvme_attach_controller" 00:26:36.820 } 00:26:36.820 EOF 00:26:36.820 )") 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.820 { 00:26:36.820 "params": { 00:26:36.820 "name": "Nvme$subsystem", 00:26:36.820 "trtype": "$TEST_TRANSPORT", 00:26:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.820 "adrfam": "ipv4", 00:26:36.820 "trsvcid": "$NVMF_PORT", 00:26:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.820 "hdgst": ${hdgst:-false}, 00:26:36.820 "ddgst": ${ddgst:-false} 00:26:36.820 }, 00:26:36.820 "method": "bdev_nvme_attach_controller" 00:26:36.820 } 00:26:36.820 EOF 00:26:36.820 )") 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.820 { 00:26:36.820 "params": { 00:26:36.820 "name": "Nvme$subsystem", 00:26:36.820 "trtype": "$TEST_TRANSPORT", 00:26:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.820 "adrfam": "ipv4", 00:26:36.820 "trsvcid": "$NVMF_PORT", 00:26:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.820 "hdgst": ${hdgst:-false}, 00:26:36.820 "ddgst": ${ddgst:-false} 00:26:36.820 }, 00:26:36.820 "method": "bdev_nvme_attach_controller" 00:26:36.820 } 00:26:36.820 EOF 00:26:36.820 )") 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.820 { 00:26:36.820 "params": { 00:26:36.820 "name": "Nvme$subsystem", 00:26:36.820 "trtype": "$TEST_TRANSPORT", 00:26:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.820 "adrfam": "ipv4", 00:26:36.820 "trsvcid": "$NVMF_PORT", 00:26:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.820 "hdgst": ${hdgst:-false}, 00:26:36.820 "ddgst": ${ddgst:-false} 00:26:36.820 }, 00:26:36.820 "method": "bdev_nvme_attach_controller" 00:26:36.820 } 00:26:36.820 EOF 00:26:36.820 )") 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.820 { 00:26:36.820 "params": { 00:26:36.820 "name": "Nvme$subsystem", 00:26:36.820 "trtype": "$TEST_TRANSPORT", 00:26:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.820 "adrfam": "ipv4", 00:26:36.820 "trsvcid": "$NVMF_PORT", 00:26:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.820 "hdgst": ${hdgst:-false}, 00:26:36.820 "ddgst": ${ddgst:-false} 00:26:36.820 }, 00:26:36.820 "method": "bdev_nvme_attach_controller" 00:26:36.820 } 00:26:36.820 EOF 00:26:36.820 )") 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.820 { 00:26:36.820 "params": { 00:26:36.820 "name": "Nvme$subsystem", 00:26:36.820 "trtype": "$TEST_TRANSPORT", 00:26:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.820 "adrfam": "ipv4", 00:26:36.820 "trsvcid": "$NVMF_PORT", 00:26:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.820 "hdgst": ${hdgst:-false}, 00:26:36.820 "ddgst": ${ddgst:-false} 00:26:36.820 }, 00:26:36.820 "method": "bdev_nvme_attach_controller" 00:26:36.820 } 00:26:36.820 EOF 00:26:36.820 )") 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.820 { 00:26:36.820 "params": { 00:26:36.820 "name": "Nvme$subsystem", 00:26:36.820 "trtype": "$TEST_TRANSPORT", 00:26:36.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.820 "adrfam": "ipv4", 00:26:36.820 "trsvcid": "$NVMF_PORT", 00:26:36.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.820 "hdgst": ${hdgst:-false}, 00:26:36.820 "ddgst": ${ddgst:-false} 00:26:36.820 }, 00:26:36.820 "method": "bdev_nvme_attach_controller" 00:26:36.820 } 00:26:36.820 EOF 00:26:36.820 )") 00:26:36.820 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.821 { 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme$subsystem", 00:26:36.821 "trtype": "$TEST_TRANSPORT", 00:26:36.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "$NVMF_PORT", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.821 "hdgst": ${hdgst:-false}, 00:26:36.821 "ddgst": ${ddgst:-false} 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 } 00:26:36.821 EOF 00:26:36.821 )") 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.821 { 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme$subsystem", 00:26:36.821 "trtype": "$TEST_TRANSPORT", 00:26:36.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "$NVMF_PORT", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.821 "hdgst": ${hdgst:-false}, 00:26:36.821 "ddgst": ${ddgst:-false} 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 } 00:26:36.821 EOF 00:26:36.821 )") 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:36.821 { 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme$subsystem", 00:26:36.821 "trtype": "$TEST_TRANSPORT", 00:26:36.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "$NVMF_PORT", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.821 "hdgst": ${hdgst:-false}, 00:26:36.821 "ddgst": ${ddgst:-false} 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 } 00:26:36.821 EOF 00:26:36.821 )") 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:26:36.821 12:21:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme1", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme2", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme3", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme4", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme5", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme6", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme7", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme8", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:36.821 "hdgst": false, 
00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme9", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 },{ 00:26:36.821 "params": { 00:26:36.821 "name": "Nvme10", 00:26:36.821 "trtype": "tcp", 00:26:36.821 "traddr": "10.0.0.2", 00:26:36.821 "adrfam": "ipv4", 00:26:36.821 "trsvcid": "4420", 00:26:36.821 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:36.821 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:36.821 "hdgst": false, 00:26:36.821 "ddgst": false 00:26:36.821 }, 00:26:36.821 "method": "bdev_nvme_attach_controller" 00:26:36.821 }' 00:26:36.821 [2024-07-22 12:21:44.589058] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:36.821 [2024-07-22 12:21:44.589133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074940 ] 00:26:36.821 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.821 [2024-07-22 12:21:44.624120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:36.821 [2024-07-22 12:21:44.653086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.821 [2024-07-22 12:21:44.739584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.716 Running I/O for 10 seconds... 
00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:38.716 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:38.974 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:38.974 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:38.974 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:38.974 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:38.974 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.974 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.231 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.231 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:26:39.231 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:39.231 12:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:39.490 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1074762 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1074762 ']' 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1074762 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1074762 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1074762' 00:26:39.491 killing process with pid 1074762 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1074762 00:26:39.491 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1074762 00:26:39.491 [2024-07-22 12:21:47.248018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4fde0 is same with the state(5) to be set 00:26:39.491 [2024-07-22 12:21:47.248099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4fde0 is same with the state(5) to be set 00:26:39.491 [2024-07-22 12:21:47.248124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4fde0 is same with the state(5) to be set 00:26:39.491 [2024-07-22 12:21:47.248136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1e4fde0 is same with the state(5) to be set 00:26:39.491
[... tcp.c:1653:nvmf_tcp_qpair_set_recv_state message repeated with advancing timestamps for tqpair=0x1e4fde0 (through 12:21:47.248910), tqpair=0x1e52870 (12:21:47.250261-.251107), tqpair=0x1e50c10 (12:21:47.254368-.255200) and tqpair=0x1e510c0 (12:21:47.255991-.256820) ...]
[2024-07-22 12:21:47.257868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.493 [2024-07-22 12:21:47.257919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-07-22 12:21:47.257938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.493 [2024-07-22 12:21:47.257952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-07-22 12:21:47.257966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.493 [2024-07-22 12:21:47.257979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-07-22 12:21:47.257993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.493 [2024-07-22 12:21:47.258006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-07-22 12:21:47.258019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd29ad0 is same with the state(5) to be set 00:26:39.493 [2024-07-22 12:21:47.258073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.493 [2024-07-22 12:21:47.258094] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-07-22 12:21:47.258110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.493 [2024-07-22 12:21:47.258124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-07-22 12:21:47.258139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.493 [2024-07-22 12:21:47.258152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb82380 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb690 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb80f90 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to
be set 00:26:39.494 [2024-07-22 12:21:47.258729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff30 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 
[2024-07-22 12:21:47.258892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.494 [2024-07-22 12:21:47.258926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.494 [2024-07-22 12:21:47.258939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5df10 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.258989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.494 [2024-07-22 12:21:47.259137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 
12:21:47.259386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51590 is same with the state(5) to be set 00:26:39.495 [2024-07-22 12:21:47.259439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.259978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.259994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-07-22 12:21:47.260402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.495 [2024-07-22 12:21:47.260418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 
[2024-07-22 12:21:47.260838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.260972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.260985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.260992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:39.496 [2024-07-22 12:21:47.260998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.261011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.261023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.261049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.261062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.261074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.261086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.261099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.261115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.261145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be 
set 00:26:39.496 [2024-07-22 12:21:47.261151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.261158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.261171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.261184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-07-22 12:21:47.261211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.496 [2024-07-22 12:21:47.261214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.496 [2024-07-22 12:21:47.261224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the 
state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:39.497 [2024-07-22 12:21:47.261482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261552] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc2aa00 was disconnected and freed. reset controller. 00:26:39.497 [2024-07-22 12:21:47.261558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51a40 is same with the state(5) to be set 00:26:39.497 [2024-07-22 12:21:47.261623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.261980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.261995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.497 [2024-07-22 12:21:47.262336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-07-22 12:21:47.262354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51ef0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.262562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51ef0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.262595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51ef0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.262595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 
12:21:47.262691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.262945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.262961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.262975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.262983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.262990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.262995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.263008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.263048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.263073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22
12:21:47.263115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.263141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.263166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.263194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.498 [2024-07-22 12:21:47.263252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-07-22 12:21:47.263264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.498 [2024-07-22 12:21:47.263276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.498 [2024-07-22 12:21:47.263289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-07-22 12:21:47.263303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-07-22 12:21:47.263315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-07-22 12:21:47.263328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-07-22 12:21:47.263340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-07-22 12:21:47.263352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-07-22 12:21:47.263365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-07-22 12:21:47.263396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.499 [2024-07-22 12:21:47.263416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.499 [2024-07-22 12:21:47.263419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-07-22 12:21:47.263429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set
00:26:39.499 [2024-07-22 12:21:47.263433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.263442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.263454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.263466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.263479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.263492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.263518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.263530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.263543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.263555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.263567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.263581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.263644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.263646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09850 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263727] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd09850 was disconnected and freed. reset controller.
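Two SPDK processes share the captured stderr here: the host side (nvme_qpair.c, nvme_tcp.c, nvme_ctrlr.c, bdev_nvme.c) and the NVMe-oF TCP target (tcp.c), so their records interleave, occasionally mid-record. Every record opens with a bracketed date and microsecond timestamp, which is enough to cut a capture back into individual records for the well-formed stretches; truly mid-record interleavings still need manual repair. A minimal Python sketch under those assumptions, with shutdown_tc3.log as a hypothetical name for the captured log:

#!/usr/bin/env python3
# Minimal sketch: split the run-together autotest stderr back into one SPDK
# log record per line. The capture file name (shutdown_tc3.log) is an
# assumption; the record prefix "[YYYY-MM-DD HH:MM:SS.usec]" is taken from
# the records visible above.
import re
import sys

# Every SPDK log record above opens with a bracketed date + microsecond time.
RECORD_START = re.compile(r"\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}\]")

def split_records(raw):
    # Cut the capture at each record start, then drop the bare elapsed-time
    # stamp (e.g. "00:26:39.498") the test runner leaves between records.
    starts = [m.start() for m in RECORD_START.finditer(raw)]
    for begin, end in zip(starts, starts[1:] + [len(raw)]):
        yield re.sub(r"\s*\d{2}:\d{2}:\d{2}\.\d{3}\s*$", "", raw[begin:end]).strip()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "shutdown_tc3.log"
    with open(path) as f:
        for record in split_records(f.read()):
            print(record)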
00:26:39.500 [2024-07-22 12:21:47.263737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.263781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.264270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.264294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.264306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e523a0 is same with the state(5) to be set 00:26:39.500 [2024-07-22 12:21:47.266926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.266953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.266975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.266990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.267020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.267055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.267083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.267112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.267141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.267175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.500 [2024-07-22 12:21:47.267204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.500 [2024-07-22 12:21:47.267219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:26:40.070 12:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:26:40.070 [2024-07-22 12:21:47.735643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.735971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.735986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.070 [2024-07-22 12:21:47.736790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.070 [2024-07-22 12:21:47.736808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.736823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.736840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.736855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.736872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.736888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.736910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.736924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.736941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.071 [2024-07-22 12:21:47.736957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.736973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.736988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 
12:21:47.737287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.071 [2024-07-22 12:21:47.737513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.737662] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd0cb70 was disconnected and freed. reset controller. 
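Each abort dump above prints one NOTICE pair per command that was still outstanding when its submission queue was deleted: the command itself from nvme_io_qpair_print_command and its ABORTED - SQ DELETION status from spdk_nvme_print_completion. Counting the command records therefore counts the aborted I/Os per opcode and submission queue. A rough sketch, reusing the same hypothetical capture file as above:

#!/usr/bin/env python3
# Rough sketch: tally the abort dumps by opcode and submission queue by
# counting the nvme_io_qpair_print_command NOTICE records. The capture file
# name is the same assumption as in the previous sketch.
import re
from collections import Counter

CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: ([A-Z]+) sqid:(\d+)")

with open("shutdown_tc3.log") as f:  # hypothetical capture file
    counts = Counter(m.groups() for m in CMD.finditer(f.read()))

for (opcode, sqid), n in sorted(counts.items()):
    print(f"sqid {sqid}: {n} {opcode} commands aborted by SQ deletion")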
00:26:40.071 [2024-07-22 12:21:47.737929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:40.071 [2024-07-22 12:21:47.737975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:40.071 [2024-07-22 12:21:47.738025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ff30 (9): Bad file descriptor 00:26:40.071 [2024-07-22 12:21:47.738052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd29ad0 (9): Bad file descriptor 00:26:40.071 [2024-07-22 12:21:47.738082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb82380 (9): Bad file descriptor 00:26:40.071 [2024-07-22 12:21:47.738161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x653610 is same with the state(5) to be set 00:26:40.071 [2024-07-22 12:21:47.738367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 
[2024-07-22 12:21:47.738483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25470 is same with the state(5) to be set 00:26:40.071 [2024-07-22 12:21:47.738542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28f70 is same with the state(5) to be set 00:26:40.071 [2024-07-22 12:21:47.738744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb690 (9): Bad file descriptor 00:26:40.071 [2024-07-22 12:21:47.738796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.071 [2024-07-22 12:21:47.738907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.071 [2024-07-22 12:21:47.738920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbbaa30 is same with the state(5) to be set 00:26:40.072 [2024-07-22 12:21:47.738962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb80f90 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.738993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5df10 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.740919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:40.072 [2024-07-22 12:21:47.740969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25470 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.741716] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:40.072 [2024-07-22 12:21:47.742085] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:40.072 [2024-07-22 12:21:47.742287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.072 [2024-07-22 12:21:47.742316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd29ad0 with addr=10.0.0.2, port=4420 00:26:40.072 [2024-07-22 12:21:47.742334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd29ad0 is same with the state(5) to be set 00:26:40.072 [2024-07-22 12:21:47.742485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.072 [2024-07-22 12:21:47.742509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2ff30 with addr=10.0.0.2, port=4420 00:26:40.072 [2024-07-22 12:21:47.742525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff30 is same with the state(5) to be set 00:26:40.072 [2024-07-22 12:21:47.742660] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:40.072 [2024-07-22 12:21:47.742774] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:40.072 [2024-07-22 12:21:47.742850] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:40.072 [2024-07-22 12:21:47.742927] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:40.072 [2024-07-22 12:21:47.743437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.072 [2024-07-22 12:21:47.743465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25470 with addr=10.0.0.2, port=4420 00:26:40.072 [2024-07-22 12:21:47.743482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25470 is same with the state(5) to be set 00:26:40.072 [2024-07-22 12:21:47.743501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd29ad0 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.743520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ff30 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.743681] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:40.072 [2024-07-22 12:21:47.743723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25470 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.743745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:40.072 [2024-07-22 12:21:47.743759] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:40.072 [2024-07-22 12:21:47.743776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:40.072 [2024-07-22 12:21:47.743796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:40.072 [2024-07-22 12:21:47.743810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:40.072 [2024-07-22 12:21:47.743823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:40.072 [2024-07-22 12:21:47.743908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.072 [2024-07-22 12:21:47.743930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.072 [2024-07-22 12:21:47.743943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:40.072 [2024-07-22 12:21:47.743956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:40.072 [2024-07-22 12:21:47.743969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:40.072 [2024-07-22 12:21:47.744041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.072 [2024-07-22 12:21:47.747966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x653610 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.748013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28f70 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.748055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbaa30 (9): Bad file descriptor 00:26:40.072 [2024-07-22 12:21:47.748208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.748976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.748992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.749010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.749026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.749043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.749058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.749077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.749093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.749110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.072 [2024-07-22 12:21:47.749126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.072 [2024-07-22 12:21:47.749143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.073 [2024-07-22 12:21:47.749437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 
12:21:47.749793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.749970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.749988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.750443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.750459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc296c0 is same with the state(5) to be set 00:26:40.073 [2024-07-22 12:21:47.751848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.751875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.751913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.751932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.751950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.751967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.751985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.752000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.073 [2024-07-22 12:21:47.752018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.073 [2024-07-22 12:21:47.752034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752217] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.752970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.752987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.753021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.753054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.753088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.753122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.753155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.753189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.074 [2024-07-22 12:21:47.753228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.074 [2024-07-22 12:21:47.753243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.075 [2024-07-22 12:21:47.753627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.753938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 
12:21:47.753970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.753987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.754003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.754020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.754037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.754055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.754073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.754091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb58490 is same with the state(5) to be set 00:26:40.075 [2024-07-22 12:21:47.755445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.755973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.755988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.756006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.756022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.756040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.756056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.756074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.075 [2024-07-22 12:21:47.756090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.075 [2024-07-22 12:21:47.756107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.076 [2024-07-22 12:21:47.756361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.076 [2024-07-22 12:21:47.756379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.076 [2024-07-22 12:21:47.756394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: 23 READ commands (sqid:1 cid:41-63, lba:21632-24448) and 14 interleaved WRITE commands (sqid:1 cid:0-13, lba:24576-26240), all len:128, printed from 12:21:47.756412 to 12:21:47.757677; every completion was ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:40.077 [2024-07-22 12:21:47.757693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59910 is same with the state(5) to be set
[log condensed: 64 READ commands (sqid:1 cid:0-63, lba:24576-32640, len:128) printed from 12:21:47.759059 to 12:21:47.761262; every completion was ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:40.078 [2024-07-22 12:21:47.761277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc8df0 is same with the state(5) to be set
00:26:40.078 [2024-07-22 12:21:47.763049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.078 [2024-07-22 12:21:47.763081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:40.078 [2024-07-22 12:21:47.763110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:40.078 [2024-07-22 12:21:47.763129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:40.078 [2024-07-22 12:21:47.763642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.078 [2024-07-22 12:21:47.763677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5df10 with addr=10.0.0.2, port=4420
00:26:40.078 [2024-07-22 12:21:47.763695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5df10 is same with the state(5) to be set
00:26:40.078 [2024-07-22 12:21:47.763832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.078 [2024-07-22 12:21:47.763863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb82380 with addr=10.0.0.2, port=4420
00:26:40.078 [2024-07-22 12:21:47.763880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb82380 is same with the state(5) to be set
00:26:40.078 [2024-07-22 12:21:47.764065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.078 [2024-07-22 12:21:47.764093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb80f90 with addr=10.0.0.2, port=4420
00:26:40.078 [2024-07-22 12:21:47.764111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb80f90 is same with the state(5) to be set
00:26:40.078 [2024-07-22 12:21:47.764281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.078 [2024-07-22 12:21:47.764307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdb690 with addr=10.0.0.2, port=4420
00:26:40.078 [2024-07-22 12:21:47.764323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb690 is same with the state(5) to be set
00:26:40.078 [2024-07-22 12:21:47.765230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.078 [2024-07-22 12:21:47.765257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
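(Editorial aside, not part of the captured log: in the completion records above, (00/08) is status-code-type 0x0, status-code 0x08, i.e. Command Aborted due to SQ Deletion in the NVMe specification. In the connect() failures, errno = 111 is ECONNREFUSED on Linux and 4420 is the IANA-assigned NVMe/TCP port, so each posix_sock_create / nvme_tcp_qpair_connect_sock / recv-state triplet records the initiator being actively refused at 10.0.0.2:4420 while the just-reset target is not yet listening again. The minimal standalone C sketch below, assuming a reachable host with nothing bound to the port, reproduces the same errno; it is not SPDK code, and the address and port simply mirror the log.)

/* Hypothetical repro, not SPDK code: a plain TCP connect() to a reachable
 * host with no listener on the port fails with ECONNREFUSED (errno 111),
 * the same errno reported by posix_sock_create in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* IPv4 TCP socket */
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* Prints "connect() failed, errno = 111" when the peer refuses. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}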
[log condensed: 63 READ commands (sqid:1 cid:1-63, lba:24704-32640, len:128) printed from 12:21:47.765283 to 12:21:47.767448; every completion was ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:40.080 [2024-07-22 12:21:47.767467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0a1f0 is same with the state(5) to be set
[log condensed: 36 READ commands (sqid:1 cid:0-35, lba:24576-29056, len:128) printed from 12:21:47.768842 to 12:21:47.770091; every completion was ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:40.081 [2024-07-22 12:21:47.770109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:40.081 [2024-07-22 12:21:47.770486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 
12:21:47.770830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.770972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.770987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.771005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.771024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.771042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.771058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.771074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0b6b0 is same with the state(5) to be set 00:26:40.081 [2024-07-22 12:21:47.772426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.772453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.772475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.772493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.772511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.772527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.772544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.081 [2024-07-22 12:21:47.772559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.081 [2024-07-22 12:21:47.772577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772878] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.772978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.772996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.082 [2024-07-22 12:21:47.773836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.082 [2024-07-22 12:21:47.773854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.773869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.773887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.773902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.773920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.773936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.773954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.773970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.773987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:40.083 [2024-07-22 12:21:47.774259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 12:21:47.774562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.083 [2024-07-22 12:21:47.774577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.083 [2024-07-22 
12:21:47.774595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.083 [2024-07-22 12:21:47.774610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.083 [2024-07-22 12:21:47.774634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc7930 is same with the state(5) to be set
00:26:40.083 [2024-07-22 12:21:47.776405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:40.083 [2024-07-22 12:21:47.776438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:40.083 [2024-07-22 12:21:47.776457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:40.083 [2024-07-22 12:21:47.776475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:40.083 [2024-07-22 12:21:47.776503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:40.083 task offset: 27136 on job bdev=Nvme2n1 fails
00:26:40.083
00:26:40.083 Latency(us)
00:26:40.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:40.083 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme1n1 ended in about 1.51 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme1n1 : 1.51 84.89 5.31 42.45 0.00 499189.63 42913.94 664874.86
00:26:40.083 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme2n1 ended in about 1.02 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme2n1 : 1.02 187.70 11.73 62.57 0.00 248503.13 5412.79 250104.79
00:26:40.083 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme3n1 ended in about 1.02 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme3n1 : 1.02 250.00 15.63 62.50 0.00 195310.25 8835.22 254765.13
00:26:40.083 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme4n1 ended in about 1.51 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme4n1 : 1.51 127.03 7.94 42.34 0.00 361346.28 23398.78 602737.02
00:26:40.083 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme5n1 ended in about 1.51 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme5n1 : 1.51 93.73 5.86 42.24 0.00 444705.95 22622.06 723905.80
00:26:40.083 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme6n1 ended in about 1.52 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme6n1 : 1.52 125.92 7.87 41.97 0.00 355475.72 22913.33 587202.56
00:26:40.083 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme7n1 ended in about 1.53 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme7n1 : 1.53 125.63 7.85 41.88 0.00 351930.41 18544.26 680409.32
00:26:40.083 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme8n1 ended in about 1.50 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme8n1 : 1.50 128.27 8.02 42.76 0.00 339632.17 22622.06 540599.18
00:26:40.083 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme9n1 ended in about 1.53 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme9n1 : 1.53 83.56 5.22 41.78 0.00 458770.71 23107.51 742547.15
00:26:40.083 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.083 Job: Nvme10n1 ended in about 1.52 seconds with error
00:26:40.083 Verification LBA range: start 0x0 length 0x400
00:26:40.083 Nvme10n1 : 1.52 126.43 7.90 42.14 0.00 336344.56 20291.89 640019.72
00:26:40.083 ===================================================================================================================
00:26:40.083 Total : 1333.18 83.32 462.63 0.00 346812.98 5412.79 742547.15
00:26:40.083 [2024-07-22 12:21:47.804959] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:40.083 [2024-07-22 12:21:47.805146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5df10 (9): Bad file descriptor
00:26:40.083 [2024-07-22 12:21:47.805191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb82380 (9): Bad file descriptor
00:26:40.083 [2024-07-22 12:21:47.805213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb80f90 (9): Bad file descriptor
00:26:40.083 [2024-07-22 12:21:47.805233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb690 (9): Bad file descriptor
00:26:40.083 [2024-07-22 12:21:47.805303] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.083 [2024-07-22 12:21:47.805336] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.083 [2024-07-22 12:21:47.805359] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.083 [2024-07-22 12:21:47.805380] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.084 [2024-07-22 12:21:47.805402] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.084 [2024-07-22 12:21:47.805547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:26:40.084 [2024-07-22 12:21:47.805912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.084 [2024-07-22 12:21:47.805950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2ff30 with addr=10.0.0.2, port=4420
00:26:40.084 [2024-07-22 12:21:47.805972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff30 is same with the state(5) to be set
00:26:40.084 [2024-07-22 12:21:47.806250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.084 [2024-07-22 12:21:47.806280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd29ad0 with addr=10.0.0.2, port=4420
00:26:40.084 [2024-07-22 12:21:47.806297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd29ad0 is same with the state(5) to be set
00:26:40.084 [2024-07-22 12:21:47.806453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.084 [2024-07-22 12:21:47.806483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25470 with addr=10.0.0.2, port=4420
00:26:40.084 [2024-07-22 12:21:47.806501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25470 is same with the state(5) to be set
00:26:40.084 [2024-07-22 12:21:47.806689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.084 [2024-07-22 12:21:47.806731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x653610 with addr=10.0.0.2, port=4420
00:26:40.084 [2024-07-22 12:21:47.806751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x653610 is same with the state(5) to be set
00:26:40.084 [2024-07-22 12:21:47.806928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.084 [2024-07-22 12:21:47.806969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbaa30 with addr=10.0.0.2, port=4420
00:26:40.084 [2024-07-22 12:21:47.806985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbaa30 is same with the state(5) to be set
00:26:40.084 [2024-07-22 12:21:47.807001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.807014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.807030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.084 [2024-07-22 12:21:47.807051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.807065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.807097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:40.084 [2024-07-22 12:21:47.807116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.807129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.807156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:40.084 [2024-07-22 12:21:47.807172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.807184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.807195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:40.084 [2024-07-22 12:21:47.807249] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.084 [2024-07-22 12:21:47.807275] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.084 [2024-07-22 12:21:47.807298] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.084 [2024-07-22 12:21:47.807319] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.084 [2024-07-22 12:21:47.808264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.084 [2024-07-22 12:21:47.808293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.084 [2024-07-22 12:21:47.808307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.084 [2024-07-22 12:21:47.808320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.084 [2024-07-22 12:21:47.808504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.084 [2024-07-22 12:21:47.808528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd28f70 with addr=10.0.0.2, port=4420
00:26:40.084 [2024-07-22 12:21:47.808543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28f70 is same with the state(5) to be set
00:26:40.084 [2024-07-22 12:21:47.808561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ff30 (9): Bad file descriptor
00:26:40.084 [2024-07-22 12:21:47.808580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd29ad0 (9): Bad file descriptor
00:26:40.084 [2024-07-22 12:21:47.808601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25470 (9): Bad file descriptor
00:26:40.084 [2024-07-22 12:21:47.808626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x653610 (9): Bad file descriptor
00:26:40.084 [2024-07-22 12:21:47.808645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbaa30 (9): Bad file descriptor
00:26:40.084 [2024-07-22 12:21:47.809017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28f70 (9): Bad file descriptor
00:26:40.084 [2024-07-22 12:21:47.809048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.809064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.809079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:40.084 [2024-07-22 12:21:47.809098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.809114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.809129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:40.084 [2024-07-22 12:21:47.809147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.809163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.809177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:26:40.084 [2024-07-22 12:21:47.809195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:40.084 [2024-07-22 12:21:47.809210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:40.084 [2024-07-22 12:21:47.809223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:40.084 [2024-07-22 12:21:47.809241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:40.084 [2024-07-22 12:21:47.809256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:40.084 [2024-07-22 12:21:47.809269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:40.084 [2024-07-22 12:21:47.809336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.084 [2024-07-22 12:21:47.809359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.084 [2024-07-22 12:21:47.809373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.084 [2024-07-22 12:21:47.809385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.084 [2024-07-22 12:21:47.809398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.084 [2024-07-22 12:21:47.809411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:40.084 [2024-07-22 12:21:47.809425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:40.084 [2024-07-22 12:21:47.809440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:40.084 [2024-07-22 12:21:47.809484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1074940 00:26:41.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1074940) - No such process 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:41.018 rmmod nvme_tcp 00:26:41.018 rmmod nvme_fabrics 00:26:41.018 rmmod nvme_keyring 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.018 12:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.919 12:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:42.919 00:26:42.919 real 0m7.390s 00:26:42.919 user 0m17.594s 00:26:42.919 sys 0m1.638s 00:26:42.919 12:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:42.919 12:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:42.919 ************************************ 00:26:42.919 END TEST nvmf_shutdown_tc3 00:26:42.919 ************************************ 00:26:42.919 12:21:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:26:42.919 12:21:50 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:26:43.176 00:26:43.176 real 0m27.240s 00:26:43.176 user 1m16.078s 00:26:43.176 sys 0m6.486s 00:26:43.176 12:21:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:43.176 12:21:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:43.176 ************************************ 00:26:43.176 END TEST nvmf_shutdown 00:26:43.176 ************************************ 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:43.177 12:21:50 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.177 12:21:50 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.177 12:21:50 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:26:43.177 12:21:50 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:43.177 12:21:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.177 ************************************ 00:26:43.177 START TEST 
nvmf_multicontroller 00:26:43.177 ************************************ 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:43.177 * Looking for test storage... 00:26:43.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.177 12:21:50 
nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- 
# nvmftestinit 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.177 12:21:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
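The nvmf/common.sh@296-@312 markers above, and the @314-@318 lines that continue below, build per-family arrays of supported NIC PCI device IDs: Intel E810 (0x1592, 0x159b), X722 (0x37d2), and a run of Mellanox ConnectX parts, each resolved through a pci_bus_cache lookup before the sysfs scan that follows picks up the bound net devices. A short sketch of that classification; the cache's exact shape here is an assumption, not the real nvmf/common.sh definition:

  # Device IDs copied from the trace; pci_bus_cache's layout is assumed
  # ("vendor:device" keys mapping to space-separated PCI addresses).
  declare -A pci_bus_cache
  intel=0x8086 mellanox=0x15b3
  net_devs=()
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  x722=(${pci_bus_cache["$intel:0x37d2"]})
  pci_devs=("${e810[@]}")   # this TCP run keeps the E810 list, per the trace
  for pci in "${pci_devs[@]}"; do
      # e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")
  done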
00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:45.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:45.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:45.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:45.078 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.078 12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.078 
12:21:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:45.078 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:45.078 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:45.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:45.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms
00:26:45.336
00:26:45.336 --- 10.0.0.2 ping statistics ---
00:26:45.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:45.336 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:45.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:45.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms
00:26:45.336
00:26:45.336 --- 10.0.0.1 ping statistics ---
00:26:45.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:45.336 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1077456
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1077456
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1077456 ']'
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.336 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.336 [2024-07-22 12:21:53.135881] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:45.336 [2024-07-22 12:21:53.135975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.336 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.336 [2024-07-22 12:21:53.174390] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:45.336 [2024-07-22 12:21:53.205203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:45.594 [2024-07-22 12:21:53.298425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.594 [2024-07-22 12:21:53.298482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.594 [2024-07-22 12:21:53.298499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.594 [2024-07-22 12:21:53.298512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.594 [2024-07-22 12:21:53.298524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
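The nvmf/common.sh@480-@482 markers above show nvmfappstart launching the target binary inside the namespace built a moment earlier, and the reactor lines just below are that target finishing startup; waitforlisten blocks until the RPC socket answers. A sketch of the sequence, with the launch command copied from the @480 trace and the polling loop an assumed stand-in for waitforlisten's actual body:

  # Start the target in the test namespace (command taken from the trace).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Assumed readiness check: poll the UNIX-domain RPC socket until it answers.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done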
00:26:45.594 [2024-07-22 12:21:53.298884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.594 [2024-07-22 12:21:53.298972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.594 [2024-07-22 12:21:53.298973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.594 [2024-07-22 12:21:53.425206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.594 Malloc0 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.594 [2024-07-22 12:21:53.490496] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.594 
12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.594 [2024-07-22 12:21:53.498392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.594 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.852 Malloc1 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1077486 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1077486 /var/tmp/bdevperf.sock 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1077486 ']' 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:45.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.852 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.110 NVMe0n1 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.110 12:21:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.110 1 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.110 request: 00:26:46.110 { 00:26:46.110 "name": "NVMe0", 00:26:46.110 "trtype": "tcp", 00:26:46.110 "traddr": "10.0.0.2", 00:26:46.110 "adrfam": "ipv4", 00:26:46.110 "trsvcid": "4420", 00:26:46.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.110 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:46.110 "hostaddr": "10.0.0.2", 00:26:46.110 "hostsvcid": "60000", 00:26:46.110 "prchk_reftag": false, 00:26:46.110 "prchk_guard": false, 00:26:46.110 "hdgst": false, 00:26:46.110 "ddgst": false, 00:26:46.110 "method": "bdev_nvme_attach_controller", 00:26:46.110 "req_id": 1 00:26:46.110 } 00:26:46.110 Got JSON-RPC error response 00:26:46.110 response: 00:26:46.110 { 00:26:46.110 "code": -114, 00:26:46.110 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:46.110 } 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.110 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.368 request: 00:26:46.368 { 00:26:46.368 "name": "NVMe0", 00:26:46.368 "trtype": "tcp", 00:26:46.368 "traddr": "10.0.0.2", 00:26:46.368 "adrfam": "ipv4", 00:26:46.368 "trsvcid": "4420", 00:26:46.368 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:46.368 "hostaddr": "10.0.0.2", 00:26:46.368 "hostsvcid": "60000", 00:26:46.368 "prchk_reftag": false, 00:26:46.368 "prchk_guard": false, 00:26:46.368 
"hdgst": false, 00:26:46.368 "ddgst": false, 00:26:46.368 "method": "bdev_nvme_attach_controller", 00:26:46.368 "req_id": 1 00:26:46.368 } 00:26:46.368 Got JSON-RPC error response 00:26:46.368 response: 00:26:46.368 { 00:26:46.368 "code": -114, 00:26:46.368 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:46.368 } 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.368 request: 00:26:46.368 { 00:26:46.368 "name": "NVMe0", 00:26:46.368 "trtype": "tcp", 00:26:46.368 "traddr": "10.0.0.2", 00:26:46.368 "adrfam": "ipv4", 00:26:46.368 "trsvcid": "4420", 00:26:46.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.368 "hostaddr": "10.0.0.2", 00:26:46.368 "hostsvcid": "60000", 00:26:46.368 "prchk_reftag": false, 00:26:46.368 "prchk_guard": false, 00:26:46.368 "hdgst": false, 00:26:46.368 "ddgst": false, 00:26:46.368 "multipath": "disable", 00:26:46.368 "method": "bdev_nvme_attach_controller", 00:26:46.368 "req_id": 1 00:26:46.368 } 00:26:46.368 Got JSON-RPC error response 00:26:46.368 response: 00:26:46.368 { 00:26:46.368 "code": -114, 00:26:46.368 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:46.368 } 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.368 12:21:54 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.368 request: 00:26:46.368 { 00:26:46.368 "name": "NVMe0", 00:26:46.368 "trtype": "tcp", 00:26:46.368 "traddr": "10.0.0.2", 00:26:46.368 "adrfam": "ipv4", 00:26:46.368 "trsvcid": "4420", 00:26:46.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.368 "hostaddr": "10.0.0.2", 00:26:46.368 "hostsvcid": "60000", 00:26:46.368 "prchk_reftag": false, 00:26:46.368 "prchk_guard": false, 00:26:46.368 "hdgst": false, 00:26:46.368 "ddgst": false, 00:26:46.368 "multipath": "failover", 00:26:46.368 "method": "bdev_nvme_attach_controller", 00:26:46.368 "req_id": 1 00:26:46.368 } 00:26:46.368 Got JSON-RPC error response 00:26:46.368 response: 00:26:46.368 { 00:26:46.368 "code": -114, 00:26:46.368 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:46.368 } 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.368 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.368 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.369 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:46.369 12:21:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:47.738 0 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1077486 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1077486 ']' 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1077486 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077486 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077486' 00:26:47.738 killing process with pid 1077486 00:26:47.738 12:21:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1077486 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1077486 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:26:47.738 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:26:47.738 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:47.738 [2024-07-22 12:21:53.595549] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:47.738 [2024-07-22 12:21:53.595648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077486 ] 00:26:47.738 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.738 [2024-07-22 12:21:53.627401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:47.738 [2024-07-22 12:21:53.655794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.738 [2024-07-22 12:21:53.740994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.738 [2024-07-22 12:21:54.217727] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name f90779c4-778d-48f7-b512-90df8b25d850 already exists 00:26:47.738 [2024-07-22 12:21:54.217767] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:f90779c4-778d-48f7-b512-90df8b25d850 alias for bdev NVMe1n1 00:26:47.738 [2024-07-22 12:21:54.217782] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:47.738 Running I/O for 1 seconds... 
00:26:47.738
00:26:47.738 Latency(us)
00:26:47.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.738 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:26:47.738 NVMe0n1 : 1.01 18683.53 72.98 0.00 0.00 6841.11 5898.24 15922.82
00:26:47.738 ===================================================================================================================
00:26:47.738 Total : 18683.53 72.98 0.00 0.00 6841.11 5898.24 15922.82
00:26:47.738 Received shutdown signal, test time was about 1.000000 seconds
00:26:47.738
00:26:47.738 Latency(us)
00:26:47.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.739 ===================================================================================================================
00:26:47.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:47.739 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:47.739 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:47.739 rmmod nvme_tcp
00:26:47.739 rmmod nvme_fabrics
00:26:47.739 rmmod nvme_keyring
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1077456 ']'
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1077456
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1077456 ']'
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1077456
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077456
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:47.995 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077456'
00:26:47.995 killing process with pid 1077456
00:26:47.995 12:21:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1077456 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.252 12:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.151 12:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:50.151 00:26:50.151 real 0m7.115s 00:26:50.151 user 0m10.671s 00:26:50.151 sys 0m2.228s 00:26:50.151 12:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.151 12:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.151 ************************************ 00:26:50.151 END TEST nvmf_multicontroller 00:26:50.151 ************************************ 00:26:50.151 12:21:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:50.151 12:21:58 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:50.151 12:21:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:50.151 12:21:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.151 12:21:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.409 ************************************ 00:26:50.409 START TEST nvmf_aer 00:26:50.409 ************************************ 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:50.409 * Looking for test storage... 
00:26:50.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.409 12:21:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:52.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:26:52.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:52.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:52.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.306 
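Both ice ports are now discovered and assigned roles: cvl_0_0 as the target interface, cvl_0_1 as the initiator. The nvmf_tcp_init trace that follows condenses to the runnable sequence below, reconstructed from the trace itself; it assumes root privileges and that the two ports are physically looped back so 10.0.0.1 and 10.0.0.2 can reach each other:

    ip -4 addr flush cvl_0_0                    # start from clean ports
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                # the target gets a private netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path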
12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.306 12:21:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:26:52.307 00:26:52.307 --- 10.0.0.2 ping statistics --- 00:26:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.307 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:52.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:26:52.307 00:26:52.307 --- 10.0.0.1 ping statistics --- 00:26:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.307 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1079685 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1079685 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1079685 ']' 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.307 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.307 [2024-07-22 12:22:00.189404] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:52.307 [2024-07-22 12:22:00.189489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.307 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.307 [2024-07-22 12:22:00.226740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:52.564 [2024-07-22 12:22:00.258765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.564 [2024-07-22 12:22:00.349997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:52.564 [2024-07-22 12:22:00.350060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.564 [2024-07-22 12:22:00.350076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.564 [2024-07-22 12:22:00.350089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.564 [2024-07-22 12:22:00.350100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.564 [2024-07-22 12:22:00.350184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.564 [2024-07-22 12:22:00.350251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.564 [2024-07-22 12:22:00.350281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.564 [2024-07-22 12:22:00.350283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.564 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.564 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:26:52.564 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.564 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.564 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 [2024-07-22 12:22:00.504553] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 Malloc0 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.822 12:22:00 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 [2024-07-22 12:22:00.558121] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.822 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.822 [ 00:26:52.822 { 00:26:52.822 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:52.822 "subtype": "Discovery", 00:26:52.822 "listen_addresses": [], 00:26:52.822 "allow_any_host": true, 00:26:52.822 "hosts": [] 00:26:52.822 }, 00:26:52.822 { 00:26:52.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.822 "subtype": "NVMe", 00:26:52.822 "listen_addresses": [ 00:26:52.822 { 00:26:52.822 "trtype": "TCP", 00:26:52.822 "adrfam": "IPv4", 00:26:52.822 "traddr": "10.0.0.2", 00:26:52.822 "trsvcid": "4420" 00:26:52.822 } 00:26:52.822 ], 00:26:52.822 "allow_any_host": true, 00:26:52.822 "hosts": [], 00:26:52.822 "serial_number": "SPDK00000000000001", 00:26:52.822 "model_number": "SPDK bdev Controller", 00:26:52.822 "max_namespaces": 2, 00:26:52.822 "min_cntlid": 1, 00:26:52.823 "max_cntlid": 65519, 00:26:52.823 "namespaces": [ 00:26:52.823 { 00:26:52.823 "nsid": 1, 00:26:52.823 "bdev_name": "Malloc0", 00:26:52.823 "name": "Malloc0", 00:26:52.823 "nguid": "B3C99FA956DA4A16A78A6A6F9BDE7EAF", 00:26:52.823 "uuid": "b3c99fa9-56da-4a16-a78a-6a6f9bde7eaf" 00:26:52.823 } 00:26:52.823 ] 00:26:52.823 } 00:26:52.823 ] 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1079714 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:52.823 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:52.823 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.084 Malloc1 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.084 Asynchronous Event Request test 00:26:53.084 Attaching to 10.0.0.2 00:26:53.084 Attached to 10.0.0.2 00:26:53.084 Registering asynchronous event callbacks... 00:26:53.084 Starting namespace attribute notice tests for all controllers... 00:26:53.084 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:53.084 aer_cb - Changed Namespace 00:26:53.084 Cleaning up... 00:26:53.084 [ 00:26:53.084 { 00:26:53.084 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:53.084 "subtype": "Discovery", 00:26:53.084 "listen_addresses": [], 00:26:53.084 "allow_any_host": true, 00:26:53.084 "hosts": [] 00:26:53.084 }, 00:26:53.084 { 00:26:53.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.084 "subtype": "NVMe", 00:26:53.084 "listen_addresses": [ 00:26:53.084 { 00:26:53.084 "trtype": "TCP", 00:26:53.084 "adrfam": "IPv4", 00:26:53.084 "traddr": "10.0.0.2", 00:26:53.084 "trsvcid": "4420" 00:26:53.084 } 00:26:53.084 ], 00:26:53.084 "allow_any_host": true, 00:26:53.084 "hosts": [], 00:26:53.084 "serial_number": "SPDK00000000000001", 00:26:53.084 "model_number": "SPDK bdev Controller", 00:26:53.084 "max_namespaces": 2, 00:26:53.084 "min_cntlid": 1, 00:26:53.084 "max_cntlid": 65519, 00:26:53.084 "namespaces": [ 00:26:53.084 { 00:26:53.084 "nsid": 1, 00:26:53.084 "bdev_name": "Malloc0", 00:26:53.084 "name": "Malloc0", 00:26:53.084 "nguid": "B3C99FA956DA4A16A78A6A6F9BDE7EAF", 00:26:53.084 "uuid": "b3c99fa9-56da-4a16-a78a-6a6f9bde7eaf" 00:26:53.084 }, 00:26:53.084 { 00:26:53.084 "nsid": 2, 00:26:53.084 "bdev_name": "Malloc1", 00:26:53.084 "name": "Malloc1", 00:26:53.084 "nguid": "3C5AC54929C042088E8BA4BCD4242617", 00:26:53.084 "uuid": "3c5ac549-29c0-4208-8e8b-a4bcd4242617" 00:26:53.084 } 00:26:53.084 ] 00:26:53.084 } 00:26:53.084 ] 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1079714 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:53.084 rmmod nvme_tcp 00:26:53.084 rmmod nvme_fabrics 00:26:53.084 rmmod nvme_keyring 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1079685 ']' 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1079685 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1079685 ']' 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1079685 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1079685 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1079685' 00:26:53.084 killing process with pid 1079685 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1079685 00:26:53.084 12:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1079685 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:26:53.343 12:22:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.869 12:22:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:55.869 00:26:55.869 real 0m5.147s 00:26:55.870 user 0m3.984s 00:26:55.870 sys 0m1.763s 00:26:55.870 12:22:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:55.870 12:22:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:55.870 ************************************ 00:26:55.870 END TEST nvmf_aer 00:26:55.870 ************************************ 00:26:55.870 12:22:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:55.870 12:22:03 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:55.870 12:22:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:55.870 12:22:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.870 12:22:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.870 ************************************ 00:26:55.870 START TEST nvmf_async_init 00:26:55.870 ************************************ 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:55.870 * Looking for test storage... 00:26:55.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b8a691f0e2ce4e6aa33384b56fd00b29 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.870 12:22:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:57.766 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:57.766 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:57.766 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:57.766 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.766 
12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.766 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:26:57.767 00:26:57.767 --- 10.0.0.2 ping statistics --- 00:26:57.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.767 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:57.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:26:57.767 00:26:57.767 --- 10.0.0.1 ping statistics --- 00:26:57.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.767 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1081764 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 
1081764 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1081764 ']' 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.767 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.767 [2024-07-22 12:22:05.579330] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:26:57.767 [2024-07-22 12:22:05.579416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.767 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.767 [2024-07-22 12:22:05.625410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:57.767 [2024-07-22 12:22:05.651496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.025 [2024-07-22 12:22:05.740271] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.025 [2024-07-22 12:22:05.740322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.025 [2024-07-22 12:22:05.740344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.025 [2024-07-22 12:22:05.740355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.025 [2024-07-22 12:22:05.740365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
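The block above is nvmf/common.sh's nvmf_tcp_init. The two E810 ports found during discovery are split across network namespaces so one host can act as both target and initiator: cvl_0_0 (NVMF_TARGET_INTERFACE) moves into a fresh namespace, cvl_0_1 (NVMF_INITIATOR_INTERFACE) stays in the default one, and a one-packet ping in each direction proves the link before the target starts. Condensed from the trace, with the interface names and 10.0.0.0/24 addresses specific to this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

NVMF_TARGET_NS_CMD ('ip netns exec cvl_0_0_ns_spdk') is then prepended to NVMF_APP, which is why the nvmf_tgt invocation above carries the netns wrapper and the target listens on 10.0.0.2 from inside the namespace.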
00:26:58.025 [2024-07-22 12:22:05.740388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.025 [2024-07-22 12:22:05.882306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.025 null0 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.025 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b8a691f0e2ce4e6aa33384b56fd00b29 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.026 [2024-07-22 12:22:05.922590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.026 12:22:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.283 nvme0n1 00:26:58.283 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.283 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:58.283 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.283 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.283 [ 00:26:58.283 { 00:26:58.283 "name": "nvme0n1", 00:26:58.283 "aliases": [ 00:26:58.283 "b8a691f0-e2ce-4e6a-a333-84b56fd00b29" 00:26:58.283 ], 00:26:58.283 "product_name": "NVMe disk", 00:26:58.283 "block_size": 512, 00:26:58.283 "num_blocks": 2097152, 00:26:58.283 "uuid": "b8a691f0-e2ce-4e6a-a333-84b56fd00b29", 00:26:58.283 "assigned_rate_limits": { 00:26:58.283 "rw_ios_per_sec": 0, 00:26:58.283 "rw_mbytes_per_sec": 0, 00:26:58.283 "r_mbytes_per_sec": 0, 00:26:58.284 "w_mbytes_per_sec": 0 00:26:58.284 }, 00:26:58.284 "claimed": false, 00:26:58.284 "zoned": false, 00:26:58.284 "supported_io_types": { 00:26:58.284 "read": true, 00:26:58.284 "write": true, 00:26:58.284 "unmap": false, 00:26:58.284 "flush": true, 00:26:58.284 "reset": true, 00:26:58.284 "nvme_admin": true, 00:26:58.284 "nvme_io": true, 00:26:58.284 "nvme_io_md": false, 00:26:58.284 "write_zeroes": true, 00:26:58.284 "zcopy": false, 00:26:58.284 "get_zone_info": false, 00:26:58.284 "zone_management": false, 00:26:58.284 "zone_append": false, 00:26:58.284 "compare": true, 00:26:58.284 "compare_and_write": true, 00:26:58.284 "abort": true, 00:26:58.284 "seek_hole": false, 00:26:58.284 "seek_data": false, 00:26:58.284 "copy": true, 00:26:58.284 "nvme_iov_md": false 00:26:58.284 }, 00:26:58.284 "memory_domains": [ 00:26:58.284 { 00:26:58.284 "dma_device_id": "system", 00:26:58.284 "dma_device_type": 1 00:26:58.284 } 00:26:58.284 ], 00:26:58.284 "driver_specific": { 00:26:58.284 "nvme": [ 00:26:58.284 { 00:26:58.284 "trid": { 00:26:58.284 "trtype": "TCP", 00:26:58.284 "adrfam": "IPv4", 00:26:58.284 "traddr": "10.0.0.2", 00:26:58.284 "trsvcid": "4420", 00:26:58.284 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:58.284 }, 00:26:58.284 "ctrlr_data": { 00:26:58.284 "cntlid": 1, 00:26:58.284 "vendor_id": "0x8086", 00:26:58.284 "model_number": "SPDK bdev Controller", 00:26:58.284 "serial_number": "00000000000000000000", 00:26:58.284 "firmware_revision": "24.09", 00:26:58.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:58.284 "oacs": { 00:26:58.284 "security": 0, 00:26:58.284 "format": 0, 00:26:58.284 "firmware": 0, 00:26:58.284 "ns_manage": 0 00:26:58.284 }, 00:26:58.284 "multi_ctrlr": true, 00:26:58.284 "ana_reporting": false 00:26:58.284 }, 00:26:58.284 "vs": { 00:26:58.284 "nvme_version": "1.3" 00:26:58.284 }, 00:26:58.284 "ns_data": { 00:26:58.284 "id": 1, 00:26:58.284 "can_share": true 00:26:58.284 } 00:26:58.284 } 00:26:58.284 ], 00:26:58.284 "mp_policy": "active_passive" 00:26:58.284 } 00:26:58.284 } 00:26:58.284 ] 00:26:58.284 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.284 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
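At this point async_init has built its whole stack over JSON-RPC: a TCP transport, a 1 GiB null backing bdev (1024 MiB of 512-byte blocks, hence num_blocks 2097152 in the dump), subsystem nqn.2016-06.io.spdk:cnode0 with its namespace UUID forced via -g, a listener on 10.0.0.2:4420, and finally a local initiator attached back to that listener as bdev nvme0n1. A condensed replay of the sequence, assuming the harness's rpc_cmd is a thin wrapper around scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b8a691f0e2ce4e6aa33384b56fd00b29
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1

The bdev_get_bdevs dump confirms the point of -g: the namespace UUID b8a691f0-e2ce-4e6a-a333-84b56fd00b29 is exactly the hex string passed at creation, re-punctuated as a UUID. The bdev_nvme_reset_controller call whose outcome follows disconnects and reconnects the same controller, which is why cntlid goes from 1 to 2 in the second dump.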
00:26:58.284 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.284 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.284 [2024-07-22 12:22:06.175755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:58.284 [2024-07-22 12:22:06.175848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb850 (9): Bad file descriptor 00:26:58.542 [2024-07-22 12:22:06.348772] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.542 [ 00:26:58.542 { 00:26:58.542 "name": "nvme0n1", 00:26:58.542 "aliases": [ 00:26:58.542 "b8a691f0-e2ce-4e6a-a333-84b56fd00b29" 00:26:58.542 ], 00:26:58.542 "product_name": "NVMe disk", 00:26:58.542 "block_size": 512, 00:26:58.542 "num_blocks": 2097152, 00:26:58.542 "uuid": "b8a691f0-e2ce-4e6a-a333-84b56fd00b29", 00:26:58.542 "assigned_rate_limits": { 00:26:58.542 "rw_ios_per_sec": 0, 00:26:58.542 "rw_mbytes_per_sec": 0, 00:26:58.542 "r_mbytes_per_sec": 0, 00:26:58.542 "w_mbytes_per_sec": 0 00:26:58.542 }, 00:26:58.542 "claimed": false, 00:26:58.542 "zoned": false, 00:26:58.542 "supported_io_types": { 00:26:58.542 "read": true, 00:26:58.542 "write": true, 00:26:58.542 "unmap": false, 00:26:58.542 "flush": true, 00:26:58.542 "reset": true, 00:26:58.542 "nvme_admin": true, 00:26:58.542 "nvme_io": true, 00:26:58.542 "nvme_io_md": false, 00:26:58.542 "write_zeroes": true, 00:26:58.542 "zcopy": false, 00:26:58.542 "get_zone_info": false, 00:26:58.542 "zone_management": false, 00:26:58.542 "zone_append": false, 00:26:58.542 "compare": true, 00:26:58.542 "compare_and_write": true, 00:26:58.542 "abort": true, 00:26:58.542 "seek_hole": false, 00:26:58.542 "seek_data": false, 00:26:58.542 "copy": true, 00:26:58.542 "nvme_iov_md": false 00:26:58.542 }, 00:26:58.542 "memory_domains": [ 00:26:58.542 { 00:26:58.542 "dma_device_id": "system", 00:26:58.542 "dma_device_type": 1 00:26:58.542 } 00:26:58.542 ], 00:26:58.542 "driver_specific": { 00:26:58.542 "nvme": [ 00:26:58.542 { 00:26:58.542 "trid": { 00:26:58.542 "trtype": "TCP", 00:26:58.542 "adrfam": "IPv4", 00:26:58.542 "traddr": "10.0.0.2", 00:26:58.542 "trsvcid": "4420", 00:26:58.542 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:58.542 }, 00:26:58.542 "ctrlr_data": { 00:26:58.542 "cntlid": 2, 00:26:58.542 "vendor_id": "0x8086", 00:26:58.542 "model_number": "SPDK bdev Controller", 00:26:58.542 "serial_number": "00000000000000000000", 00:26:58.542 "firmware_revision": "24.09", 00:26:58.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:58.542 "oacs": { 00:26:58.542 "security": 0, 00:26:58.542 "format": 0, 00:26:58.542 "firmware": 0, 00:26:58.542 "ns_manage": 0 00:26:58.542 }, 00:26:58.542 "multi_ctrlr": true, 00:26:58.542 "ana_reporting": false 00:26:58.542 }, 00:26:58.542 "vs": { 00:26:58.542 "nvme_version": "1.3" 00:26:58.542 }, 00:26:58.542 "ns_data": { 00:26:58.542 "id": 1, 00:26:58.542 "can_share": true 00:26:58.542 } 00:26:58.542 } 00:26:58.542 ], 00:26:58.542 "mp_policy": "active_passive" 00:26:58.542 } 00:26:58.542 } 
00:26:58.542 ] 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2O8J5s8OsM 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2O8J5s8OsM 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.542 [2024-07-22 12:22:06.400534] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:58.542 [2024-07-22 12:22:06.400671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2O8J5s8OsM 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.542 [2024-07-22 12:22:06.408552] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2O8J5s8OsM 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.542 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.542 [2024-07-22 12:22:06.416587] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:58.542 [2024-07-22 12:22:06.416658] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
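After the reset check the test moves to its TLS leg: the controller is detached, a pre-shared key is written to a temp file, the subsystem is closed to arbitrary hosts, and a second listener on port 4421 is created with --secure-channel so that only host1, presenting the matching PSK, can attach. Condensed from the trace (key value and temp path are from this run; rpc.py again stands in for rpc_cmd):

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

Note the warnings in the trace: at this revision (v24.09-pre) both the target-side PSK path and the initiator-side spdk_nvme_ctrlr_opts.psk are deprecated features scheduled for removal in v24.09, and TLS support is still flagged experimental on both ends.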
00:26:58.800 nvme0n1 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.800 [ 00:26:58.800 { 00:26:58.800 "name": "nvme0n1", 00:26:58.800 "aliases": [ 00:26:58.800 "b8a691f0-e2ce-4e6a-a333-84b56fd00b29" 00:26:58.800 ], 00:26:58.800 "product_name": "NVMe disk", 00:26:58.800 "block_size": 512, 00:26:58.800 "num_blocks": 2097152, 00:26:58.800 "uuid": "b8a691f0-e2ce-4e6a-a333-84b56fd00b29", 00:26:58.800 "assigned_rate_limits": { 00:26:58.800 "rw_ios_per_sec": 0, 00:26:58.800 "rw_mbytes_per_sec": 0, 00:26:58.800 "r_mbytes_per_sec": 0, 00:26:58.800 "w_mbytes_per_sec": 0 00:26:58.800 }, 00:26:58.800 "claimed": false, 00:26:58.800 "zoned": false, 00:26:58.800 "supported_io_types": { 00:26:58.800 "read": true, 00:26:58.800 "write": true, 00:26:58.800 "unmap": false, 00:26:58.800 "flush": true, 00:26:58.800 "reset": true, 00:26:58.800 "nvme_admin": true, 00:26:58.800 "nvme_io": true, 00:26:58.800 "nvme_io_md": false, 00:26:58.800 "write_zeroes": true, 00:26:58.800 "zcopy": false, 00:26:58.800 "get_zone_info": false, 00:26:58.800 "zone_management": false, 00:26:58.800 "zone_append": false, 00:26:58.800 "compare": true, 00:26:58.800 "compare_and_write": true, 00:26:58.800 "abort": true, 00:26:58.800 "seek_hole": false, 00:26:58.800 "seek_data": false, 00:26:58.800 "copy": true, 00:26:58.800 "nvme_iov_md": false 00:26:58.800 }, 00:26:58.800 "memory_domains": [ 00:26:58.800 { 00:26:58.800 "dma_device_id": "system", 00:26:58.800 "dma_device_type": 1 00:26:58.800 } 00:26:58.800 ], 00:26:58.800 "driver_specific": { 00:26:58.800 "nvme": [ 00:26:58.800 { 00:26:58.800 "trid": { 00:26:58.800 "trtype": "TCP", 00:26:58.800 "adrfam": "IPv4", 00:26:58.800 "traddr": "10.0.0.2", 00:26:58.800 "trsvcid": "4421", 00:26:58.800 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:58.800 }, 00:26:58.800 "ctrlr_data": { 00:26:58.800 "cntlid": 3, 00:26:58.800 "vendor_id": "0x8086", 00:26:58.800 "model_number": "SPDK bdev Controller", 00:26:58.800 "serial_number": "00000000000000000000", 00:26:58.800 "firmware_revision": "24.09", 00:26:58.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:58.800 "oacs": { 00:26:58.800 "security": 0, 00:26:58.800 "format": 0, 00:26:58.800 "firmware": 0, 00:26:58.800 "ns_manage": 0 00:26:58.800 }, 00:26:58.800 "multi_ctrlr": true, 00:26:58.800 "ana_reporting": false 00:26:58.800 }, 00:26:58.800 "vs": { 00:26:58.800 "nvme_version": "1.3" 00:26:58.800 }, 00:26:58.800 "ns_data": { 00:26:58.800 "id": 1, 00:26:58.800 "can_share": true 00:26:58.800 } 00:26:58.800 } 00:26:58.800 ], 00:26:58.800 "mp_policy": "active_passive" 00:26:58.800 } 00:26:58.800 } 00:26:58.800 ] 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.2O8J5s8OsM 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:58.800 rmmod nvme_tcp 00:26:58.800 rmmod nvme_fabrics 00:26:58.800 rmmod nvme_keyring 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1081764 ']' 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1081764 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1081764 ']' 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1081764 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1081764 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1081764' 00:26:58.800 killing process with pid 1081764 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1081764 00:26:58.800 [2024-07-22 12:22:06.626578] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:58.800 [2024-07-22 12:22:06.626625] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:58.800 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1081764 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.058 12:22:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:00.958 12:22:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:00.958 00:27:00.958 real 0m5.590s 00:27:00.958 user 0m2.147s 00:27:00.958 sys 0m1.832s 00:27:00.958 12:22:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:00.958 12:22:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.958 ************************************ 00:27:00.958 END TEST nvmf_async_init 00:27:00.958 ************************************ 00:27:01.217 12:22:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:01.217 12:22:08 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:01.217 12:22:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.217 12:22:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.217 12:22:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.217 ************************************ 00:27:01.217 START TEST dma 00:27:01.217 ************************************ 00:27:01.217 12:22:08 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:01.217 * Looking for test storage... 00:27:01.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:01.217 12:22:08 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.217 12:22:08 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.217 12:22:08 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.217 12:22:08 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.217 12:22:08 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.217 12:22:08 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.217 12:22:08 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.217 12:22:08 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:01.217 12:22:08 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.217 12:22:08 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.217 12:22:08 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:01.217 12:22:08 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:01.217 00:27:01.217 real 0m0.065s 00:27:01.217 user 0m0.029s 00:27:01.217 sys 0m0.041s 00:27:01.217 12:22:08 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.217 12:22:09 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:27:01.217 ************************************ 00:27:01.217 END TEST dma 00:27:01.217 ************************************ 00:27:01.217 12:22:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:01.217 12:22:09 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:01.217 12:22:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.217 12:22:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.217 12:22:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.217 ************************************ 00:27:01.217 START TEST nvmf_identify 00:27:01.217 ************************************ 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:01.217 * Looking for test storage... 00:27:01.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.217 12:22:09 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.218 12:22:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.111 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.111 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.111 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.111 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.111 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.111 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.111 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:03.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:03.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:03.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
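Back in nvmftestinit for the identify test, the same NIC discovery runs again: supported Intel/Mellanox device IDs are collected into pci_devs, the two E810 functions (0x8086:0x159b) match, and each function's kernel netdev name is read out of sysfs. A condensed paraphrase of the loop being traced here, using the PCI addresses from this host; the bare [[ up == up ]] steps in the trace appear to be an interface-state test shown post-expansion, so the operstate read below is an assumption:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            # keep only interfaces whose link state is up (assumed operstate check)
            [[ $(< "$dev/operstate") == up ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done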
00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:03.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.112 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:03.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:27:03.369 00:27:03.369 --- 10.0.0.2 ping statistics --- 00:27:03.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.369 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:27:03.369 00:27:03.369 --- 10.0.0.1 ping statistics --- 00:27:03.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.369 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1083886 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1083886 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1083886 ']' 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.369 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.369 [2024-07-22 12:22:11.244298] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
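Unlike async_init, which pinned its target to a single core (-m 0x1), the identify test starts nvmf_tgt with -m 0xF, and the reactor messages that follow confirm cores 0 through 3 coming up. The launch line traced above uses the standard SPDK app options (core mask, shared-memory instance id, tracepoint group mask):

    # target launch as traced above; -m 0xF is the core mask (cores 0-3),
    # -i 0 the shm id, -e 0xFFFF the tracepoint mask consumed by 'spdk_trace -s nvmf -i 0'
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF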
00:27:03.369 [2024-07-22 12:22:11.244369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.369 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.369 [2024-07-22 12:22:11.283904] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:03.626 [2024-07-22 12:22:11.315376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.626 [2024-07-22 12:22:11.408399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.626 [2024-07-22 12:22:11.408467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.626 [2024-07-22 12:22:11.408484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.626 [2024-07-22 12:22:11.408501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.626 [2024-07-22 12:22:11.408514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:03.626 [2024-07-22 12:22:11.411637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.626 [2024-07-22 12:22:11.411683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.626 [2024-07-22 12:22:11.411760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.626 [2024-07-22 12:22:11.411764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.626 [2024-07-22 12:22:11.538215] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.626 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 Malloc0 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 [2024-07-22 12:22:11.609075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.886 [ 00:27:03.886 { 00:27:03.886 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:03.886 "subtype": "Discovery", 00:27:03.886 "listen_addresses": [ 00:27:03.886 { 00:27:03.886 "trtype": "TCP", 00:27:03.886 "adrfam": "IPv4", 00:27:03.886 "traddr": "10.0.0.2", 00:27:03.886 "trsvcid": "4420" 00:27:03.886 } 00:27:03.886 ], 00:27:03.886 "allow_any_host": true, 00:27:03.886 "hosts": [] 00:27:03.886 }, 00:27:03.886 { 00:27:03.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:03.886 "subtype": "NVMe", 00:27:03.886 "listen_addresses": [ 00:27:03.886 { 00:27:03.886 "trtype": "TCP", 00:27:03.886 "adrfam": "IPv4", 00:27:03.886 "traddr": "10.0.0.2", 00:27:03.886 "trsvcid": "4420" 00:27:03.886 } 00:27:03.886 ], 00:27:03.886 "allow_any_host": true, 00:27:03.886 "hosts": [], 00:27:03.886 "serial_number": "SPDK00000000000001", 00:27:03.886 "model_number": "SPDK bdev Controller", 00:27:03.886 "max_namespaces": 32, 00:27:03.886 "min_cntlid": 1, 00:27:03.886 "max_cntlid": 65519, 00:27:03.886 "namespaces": [ 00:27:03.886 { 00:27:03.886 "nsid": 1, 00:27:03.886 "bdev_name": "Malloc0", 00:27:03.886 "name": "Malloc0", 00:27:03.886 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:03.886 "eui64": "ABCDEF0123456789", 00:27:03.886 "uuid": "dd91c931-f776-4cce-9f89-f319ece6d1ad" 00:27:03.886 } 00:27:03.886 ] 00:27:03.886 } 00:27:03.886 ] 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.886 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:03.886 
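The identify target is a 64 MiB malloc bdev exported as cnode1, with the namespace identifiers (NGUID and EUI-64) set explicitly so the host-side identify output is predictable, plus a listener registered on the discovery subsystem at the same address. Reproduced as plain rpc.py calls (again assuming rpc_cmd wraps scripts/rpc.py):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems

With that in place, spdk_nvme_identify is pointed at the discovery subsystem via the -r transport-ID string, and -L all enables every debug log flag, which is what produces the nvme_tcp/nvme_ctrlr trace that follows.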
[2024-07-22 12:22:11.646530] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:27:03.886 [2024-07-22 12:22:11.646567] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083919 ] 00:27:03.886 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.886 [2024-07-22 12:22:11.663306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:03.886 [2024-07-22 12:22:11.680855] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:03.886 [2024-07-22 12:22:11.680915] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:03.886 [2024-07-22 12:22:11.680940] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:03.886 [2024-07-22 12:22:11.680958] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:03.886 [2024-07-22 12:22:11.680968] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:03.886 [2024-07-22 12:22:11.681305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:03.886 [2024-07-22 12:22:11.681365] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xda6630 0 00:27:03.886 [2024-07-22 12:22:11.695622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:03.886 [2024-07-22 12:22:11.695643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:03.886 [2024-07-22 12:22:11.695653] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:03.886 [2024-07-22 12:22:11.695659] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:03.886 [2024-07-22 12:22:11.695726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.695739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.695747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.886 [2024-07-22 12:22:11.695766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:03.886 [2024-07-22 12:22:11.695794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.886 [2024-07-22 12:22:11.703628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.886 [2024-07-22 12:22:11.703645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.886 [2024-07-22 12:22:11.703653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.703660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.886 [2024-07-22 12:22:11.703680] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:03.886 [2024-07-22 12:22:11.703708] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:03.886 [2024-07-22 12:22:11.703718] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:03.886 [2024-07-22 12:22:11.703748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.703757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.703764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.886 [2024-07-22 12:22:11.703775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.886 [2024-07-22 12:22:11.703799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.886 [2024-07-22 12:22:11.703956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.886 [2024-07-22 12:22:11.703969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.886 [2024-07-22 12:22:11.703976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.703983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.886 [2024-07-22 12:22:11.703996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:03.886 [2024-07-22 12:22:11.704010] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:03.886 [2024-07-22 12:22:11.704022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.704030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.704036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.886 [2024-07-22 12:22:11.704047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.886 [2024-07-22 12:22:11.704068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.886 [2024-07-22 12:22:11.704172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.886 [2024-07-22 12:22:11.704184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.886 [2024-07-22 12:22:11.704191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.704197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.886 [2024-07-22 12:22:11.704206] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:03.886 [2024-07-22 12:22:11.704220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:03.886 [2024-07-22 12:22:11.704232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.704239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.886 [2024-07-22 12:22:11.704246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.886 [2024-07-22 12:22:11.704256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.886 [2024-07-22 12:22:11.704277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.886 [2024-07-22 12:22:11.704391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.887 [2024-07-22 12:22:11.704406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.887 [2024-07-22 12:22:11.704413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.704420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.887 [2024-07-22 12:22:11.704429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:03.887 [2024-07-22 12:22:11.704446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.704455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.704466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.704478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.887 [2024-07-22 12:22:11.704500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.887 [2024-07-22 12:22:11.704604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.887 [2024-07-22 12:22:11.704623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.887 [2024-07-22 12:22:11.704631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.704638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.887 [2024-07-22 12:22:11.704646] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:03.887 [2024-07-22 12:22:11.704655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:03.887 [2024-07-22 12:22:11.704668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:03.887 [2024-07-22 12:22:11.704778] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:03.887 [2024-07-22 12:22:11.704786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:03.887 [2024-07-22 12:22:11.704800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.704808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.704814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.704824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.887 [2024-07-22 12:22:11.704846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.887 [2024-07-22 
12:22:11.704992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.887 [2024-07-22 12:22:11.705008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.887 [2024-07-22 12:22:11.705015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.887 [2024-07-22 12:22:11.705030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:03.887 [2024-07-22 12:22:11.705046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705062] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.705072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.887 [2024-07-22 12:22:11.705093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.887 [2024-07-22 12:22:11.705198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.887 [2024-07-22 12:22:11.705213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.887 [2024-07-22 12:22:11.705220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.887 [2024-07-22 12:22:11.705235] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:03.887 [2024-07-22 12:22:11.705244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:03.887 [2024-07-22 12:22:11.705261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:03.887 [2024-07-22 12:22:11.705280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:03.887 [2024-07-22 12:22:11.705297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.705316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.887 [2024-07-22 12:22:11.705337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.887 [2024-07-22 12:22:11.705498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.887 [2024-07-22 12:22:11.705510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.887 [2024-07-22 12:22:11.705517] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705525] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda6630): datao=0, datal=4096, 
cccid=0 00:27:03.887 [2024-07-22 12:22:11.705533] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf4f80) on tqpair(0xda6630): expected_datao=0, payload_size=4096 00:27:03.887 [2024-07-22 12:22:11.705541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705552] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705561] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.887 [2024-07-22 12:22:11.705587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.887 [2024-07-22 12:22:11.705594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.887 [2024-07-22 12:22:11.705622] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:03.887 [2024-07-22 12:22:11.705633] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:03.887 [2024-07-22 12:22:11.705641] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:03.887 [2024-07-22 12:22:11.705650] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:03.887 [2024-07-22 12:22:11.705658] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:03.887 [2024-07-22 12:22:11.705666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:03.887 [2024-07-22 12:22:11.705682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:03.887 [2024-07-22 12:22:11.705694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.705719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:03.887 [2024-07-22 12:22:11.705741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.887 [2024-07-22 12:22:11.705862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.887 [2024-07-22 12:22:11.705874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.887 [2024-07-22 12:22:11.705885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:03.887 [2024-07-22 12:22:11.705905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.705929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.887 [2024-07-22 12:22:11.705939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.705961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.887 [2024-07-22 12:22:11.705971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.705984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.705993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.887 [2024-07-22 12:22:11.706003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.706009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.706031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.706041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.887 [2024-07-22 12:22:11.706049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:03.887 [2024-07-22 12:22:11.706068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:03.887 [2024-07-22 12:22:11.706081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.706088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda6630) 00:27:03.887 [2024-07-22 12:22:11.706098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.887 [2024-07-22 12:22:11.706120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf4f80, cid 0, qid 0 00:27:03.887 [2024-07-22 12:22:11.706146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5100, cid 1, qid 0 00:27:03.887 [2024-07-22 12:22:11.706155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5280, cid 2, qid 0 00:27:03.887 [2024-07-22 12:22:11.706163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:03.887 [2024-07-22 12:22:11.706171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5580, cid 4, qid 0 00:27:03.887 [2024-07-22 12:22:11.706341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.887 [2024-07-22 12:22:11.706357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.887 [2024-07-22 12:22:11.706363] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.887 [2024-07-22 12:22:11.706370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5580) on tqpair=0xda6630 00:27:03.887 [2024-07-22 12:22:11.706379] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:03.888 [2024-07-22 12:22:11.706388] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:03.888 [2024-07-22 12:22:11.706409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.706420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda6630) 00:27:03.888 [2024-07-22 12:22:11.706430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.888 [2024-07-22 12:22:11.706466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5580, cid 4, qid 0 00:27:03.888 [2024-07-22 12:22:11.706666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.888 [2024-07-22 12:22:11.706682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.888 [2024-07-22 12:22:11.706689] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.706696] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda6630): datao=0, datal=4096, cccid=4 00:27:03.888 [2024-07-22 12:22:11.706704] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf5580) on tqpair(0xda6630): expected_datao=0, payload_size=4096 00:27:03.888 [2024-07-22 12:22:11.706712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.706728] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.706737] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.747741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.888 [2024-07-22 12:22:11.747760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.888 [2024-07-22 12:22:11.747767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.747774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5580) on tqpair=0xda6630 00:27:03.888 [2024-07-22 12:22:11.747794] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:03.888 [2024-07-22 12:22:11.747832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.747843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda6630) 00:27:03.888 [2024-07-22 12:22:11.747854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.888 [2024-07-22 12:22:11.747866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.747874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.747880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda6630) 00:27:03.888 [2024-07-22 
12:22:11.747889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.888 [2024-07-22 12:22:11.747917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5580, cid 4, qid 0 00:27:03.888 [2024-07-22 12:22:11.747929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5700, cid 5, qid 0 00:27:03.888 [2024-07-22 12:22:11.748079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.888 [2024-07-22 12:22:11.748092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.888 [2024-07-22 12:22:11.748099] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.748106] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda6630): datao=0, datal=1024, cccid=4 00:27:03.888 [2024-07-22 12:22:11.748113] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf5580) on tqpair(0xda6630): expected_datao=0, payload_size=1024 00:27:03.888 [2024-07-22 12:22:11.748121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.748131] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.748138] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.748147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.888 [2024-07-22 12:22:11.748160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.888 [2024-07-22 12:22:11.748168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.748175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5700) on tqpair=0xda6630 00:27:03.888 [2024-07-22 12:22:11.788751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.888 [2024-07-22 12:22:11.788770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.888 [2024-07-22 12:22:11.788778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.788785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5580) on tqpair=0xda6630 00:27:03.888 [2024-07-22 12:22:11.788803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.788813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda6630) 00:27:03.888 [2024-07-22 12:22:11.788824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.888 [2024-07-22 12:22:11.788854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5580, cid 4, qid 0 00:27:03.888 [2024-07-22 12:22:11.788988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.888 [2024-07-22 12:22:11.789003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.888 [2024-07-22 12:22:11.789010] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789016] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda6630): datao=0, datal=3072, cccid=4 00:27:03.888 [2024-07-22 12:22:11.789024] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf5580) on tqpair(0xda6630): expected_datao=0, payload_size=3072 00:27:03.888 [2024-07-22 
12:22:11.789032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789042] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789050] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:03.888 [2024-07-22 12:22:11.789071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:03.888 [2024-07-22 12:22:11.789078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5580) on tqpair=0xda6630 00:27:03.888 [2024-07-22 12:22:11.789100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda6630) 00:27:03.888 [2024-07-22 12:22:11.789119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.888 [2024-07-22 12:22:11.789147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5580, cid 4, qid 0 00:27:03.888 [2024-07-22 12:22:11.789274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:03.888 [2024-07-22 12:22:11.789287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:03.888 [2024-07-22 12:22:11.789294] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789300] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda6630): datao=0, datal=8, cccid=4 00:27:03.888 [2024-07-22 12:22:11.789308] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf5580) on tqpair(0xda6630): expected_datao=0, payload_size=8 00:27:03.888 [2024-07-22 12:22:11.789315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789325] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:03.888 [2024-07-22 12:22:11.789332] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.164 [2024-07-22 12:22:11.832629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.164 [2024-07-22 12:22:11.832647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.165 [2024-07-22 12:22:11.832677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.165 [2024-07-22 12:22:11.832685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5580) on tqpair=0xda6630 00:27:04.165 ===================================================== 00:27:04.165 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:04.165 ===================================================== 00:27:04.165 Controller Capabilities/Features 00:27:04.165 ================================ 00:27:04.165 Vendor ID: 0000 00:27:04.165 Subsystem Vendor ID: 0000 00:27:04.165 Serial Number: .................... 00:27:04.165 Model Number: ........................................ 
00:27:04.165 Firmware Version: 24.09 00:27:04.165 Recommended Arb Burst: 0 00:27:04.165 IEEE OUI Identifier: 00 00 00 00:27:04.165 Multi-path I/O 00:27:04.165 May have multiple subsystem ports: No 00:27:04.165 May have multiple controllers: No 00:27:04.165 Associated with SR-IOV VF: No 00:27:04.165 Max Data Transfer Size: 131072 00:27:04.165 Max Number of Namespaces: 0 00:27:04.165 Max Number of I/O Queues: 1024 00:27:04.165 NVMe Specification Version (VS): 1.3 00:27:04.165 NVMe Specification Version (Identify): 1.3 00:27:04.165 Maximum Queue Entries: 128 00:27:04.165 Contiguous Queues Required: Yes 00:27:04.165 Arbitration Mechanisms Supported 00:27:04.165 Weighted Round Robin: Not Supported 00:27:04.165 Vendor Specific: Not Supported 00:27:04.165 Reset Timeout: 15000 ms 00:27:04.165 Doorbell Stride: 4 bytes 00:27:04.165 NVM Subsystem Reset: Not Supported 00:27:04.165 Command Sets Supported 00:27:04.165 NVM Command Set: Supported 00:27:04.165 Boot Partition: Not Supported 00:27:04.165 Memory Page Size Minimum: 4096 bytes 00:27:04.165 Memory Page Size Maximum: 4096 bytes 00:27:04.165 Persistent Memory Region: Not Supported 00:27:04.165 Optional Asynchronous Events Supported 00:27:04.165 Namespace Attribute Notices: Not Supported 00:27:04.165 Firmware Activation Notices: Not Supported 00:27:04.165 ANA Change Notices: Not Supported 00:27:04.165 PLE Aggregate Log Change Notices: Not Supported 00:27:04.165 LBA Status Info Alert Notices: Not Supported 00:27:04.165 EGE Aggregate Log Change Notices: Not Supported 00:27:04.165 Normal NVM Subsystem Shutdown event: Not Supported 00:27:04.165 Zone Descriptor Change Notices: Not Supported 00:27:04.165 Discovery Log Change Notices: Supported 00:27:04.165 Controller Attributes 00:27:04.165 128-bit Host Identifier: Not Supported 00:27:04.165 Non-Operational Permissive Mode: Not Supported 00:27:04.165 NVM Sets: Not Supported 00:27:04.165 Read Recovery Levels: Not Supported 00:27:04.165 Endurance Groups: Not Supported 00:27:04.165 Predictable Latency Mode: Not Supported 00:27:04.165 Traffic Based Keep ALive: Not Supported 00:27:04.165 Namespace Granularity: Not Supported 00:27:04.165 SQ Associations: Not Supported 00:27:04.165 UUID List: Not Supported 00:27:04.165 Multi-Domain Subsystem: Not Supported 00:27:04.165 Fixed Capacity Management: Not Supported 00:27:04.165 Variable Capacity Management: Not Supported 00:27:04.165 Delete Endurance Group: Not Supported 00:27:04.165 Delete NVM Set: Not Supported 00:27:04.165 Extended LBA Formats Supported: Not Supported 00:27:04.165 Flexible Data Placement Supported: Not Supported 00:27:04.165 00:27:04.165 Controller Memory Buffer Support 00:27:04.165 ================================ 00:27:04.165 Supported: No 00:27:04.165 00:27:04.165 Persistent Memory Region Support 00:27:04.165 ================================ 00:27:04.165 Supported: No 00:27:04.165 00:27:04.165 Admin Command Set Attributes 00:27:04.165 ============================ 00:27:04.165 Security Send/Receive: Not Supported 00:27:04.165 Format NVM: Not Supported 00:27:04.165 Firmware Activate/Download: Not Supported 00:27:04.165 Namespace Management: Not Supported 00:27:04.165 Device Self-Test: Not Supported 00:27:04.165 Directives: Not Supported 00:27:04.165 NVMe-MI: Not Supported 00:27:04.165 Virtualization Management: Not Supported 00:27:04.165 Doorbell Buffer Config: Not Supported 00:27:04.165 Get LBA Status Capability: Not Supported 00:27:04.165 Command & Feature Lockdown Capability: Not Supported 00:27:04.165 Abort Command Limit: 1 00:27:04.165 Async 
Event Request Limit: 4 00:27:04.165 Number of Firmware Slots: N/A 00:27:04.165 Firmware Slot 1 Read-Only: N/A 00:27:04.165 Firmware Activation Without Reset: N/A 00:27:04.165 Multiple Update Detection Support: N/A 00:27:04.165 Firmware Update Granularity: No Information Provided 00:27:04.165 Per-Namespace SMART Log: No 00:27:04.165 Asymmetric Namespace Access Log Page: Not Supported 00:27:04.165 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:04.165 Command Effects Log Page: Not Supported 00:27:04.165 Get Log Page Extended Data: Supported 00:27:04.165 Telemetry Log Pages: Not Supported 00:27:04.165 Persistent Event Log Pages: Not Supported 00:27:04.165 Supported Log Pages Log Page: May Support 00:27:04.165 Commands Supported & Effects Log Page: Not Supported 00:27:04.165 Feature Identifiers & Effects Log Page:May Support 00:27:04.165 NVMe-MI Commands & Effects Log Page: May Support 00:27:04.165 Data Area 4 for Telemetry Log: Not Supported 00:27:04.165 Error Log Page Entries Supported: 128 00:27:04.165 Keep Alive: Not Supported 00:27:04.165 00:27:04.165 NVM Command Set Attributes 00:27:04.165 ========================== 00:27:04.165 Submission Queue Entry Size 00:27:04.165 Max: 1 00:27:04.165 Min: 1 00:27:04.165 Completion Queue Entry Size 00:27:04.165 Max: 1 00:27:04.165 Min: 1 00:27:04.165 Number of Namespaces: 0 00:27:04.165 Compare Command: Not Supported 00:27:04.165 Write Uncorrectable Command: Not Supported 00:27:04.165 Dataset Management Command: Not Supported 00:27:04.165 Write Zeroes Command: Not Supported 00:27:04.165 Set Features Save Field: Not Supported 00:27:04.165 Reservations: Not Supported 00:27:04.165 Timestamp: Not Supported 00:27:04.165 Copy: Not Supported 00:27:04.165 Volatile Write Cache: Not Present 00:27:04.165 Atomic Write Unit (Normal): 1 00:27:04.165 Atomic Write Unit (PFail): 1 00:27:04.165 Atomic Compare & Write Unit: 1 00:27:04.165 Fused Compare & Write: Supported 00:27:04.165 Scatter-Gather List 00:27:04.165 SGL Command Set: Supported 00:27:04.165 SGL Keyed: Supported 00:27:04.165 SGL Bit Bucket Descriptor: Not Supported 00:27:04.165 SGL Metadata Pointer: Not Supported 00:27:04.165 Oversized SGL: Not Supported 00:27:04.165 SGL Metadata Address: Not Supported 00:27:04.165 SGL Offset: Supported 00:27:04.165 Transport SGL Data Block: Not Supported 00:27:04.165 Replay Protected Memory Block: Not Supported 00:27:04.165 00:27:04.165 Firmware Slot Information 00:27:04.165 ========================= 00:27:04.165 Active slot: 0 00:27:04.165 00:27:04.165 00:27:04.165 Error Log 00:27:04.165 ========= 00:27:04.165 00:27:04.165 Active Namespaces 00:27:04.165 ================= 00:27:04.165 Discovery Log Page 00:27:04.165 ================== 00:27:04.165 Generation Counter: 2 00:27:04.165 Number of Records: 2 00:27:04.165 Record Format: 0 00:27:04.165 00:27:04.165 Discovery Log Entry 0 00:27:04.165 ---------------------- 00:27:04.165 Transport Type: 3 (TCP) 00:27:04.165 Address Family: 1 (IPv4) 00:27:04.165 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:04.165 Entry Flags: 00:27:04.165 Duplicate Returned Information: 1 00:27:04.165 Explicit Persistent Connection Support for Discovery: 1 00:27:04.165 Transport Requirements: 00:27:04.165 Secure Channel: Not Required 00:27:04.165 Port ID: 0 (0x0000) 00:27:04.165 Controller ID: 65535 (0xffff) 00:27:04.165 Admin Max SQ Size: 128 00:27:04.165 Transport Service Identifier: 4420 00:27:04.165 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:04.165 Transport Address: 10.0.0.2 00:27:04.165 
Discovery Log Entry 1 00:27:04.165 ---------------------- 00:27:04.165 Transport Type: 3 (TCP) 00:27:04.165 Address Family: 1 (IPv4) 00:27:04.165 Subsystem Type: 2 (NVM Subsystem) 00:27:04.165 Entry Flags: 00:27:04.165 Duplicate Returned Information: 0 00:27:04.165 Explicit Persistent Connection Support for Discovery: 0 00:27:04.165 Transport Requirements: 00:27:04.165 Secure Channel: Not Required 00:27:04.165 Port ID: 0 (0x0000) 00:27:04.165 Controller ID: 65535 (0xffff) 00:27:04.165 Admin Max SQ Size: 128 00:27:04.165 Transport Service Identifier: 4420 00:27:04.165 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:04.165 Transport Address: 10.0.0.2 [2024-07-22 12:22:11.832809] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:04.165 [2024-07-22 12:22:11.832833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf4f80) on tqpair=0xda6630 00:27:04.165 [2024-07-22 12:22:11.832846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.165 [2024-07-22 12:22:11.832855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5100) on tqpair=0xda6630 00:27:04.165 [2024-07-22 12:22:11.832863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.165 [2024-07-22 12:22:11.832871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5280) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.832879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.166 [2024-07-22 12:22:11.832887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.832895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.166 [2024-07-22 12:22:11.832913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.832922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.832929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.832940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.832966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.833180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.833193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.833200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.833218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.833243] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.833269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.833398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.833410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.833417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.833433] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:04.166 [2024-07-22 12:22:11.833441] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:04.166 [2024-07-22 12:22:11.833457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.833486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.833508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.833629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.833645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.833652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.833677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.833704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.833725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.833832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.833844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.833851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.833874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.833889] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.833899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.833920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.834027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.834043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.834049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.834073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.834099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.834120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.834225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.834238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.834245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.834267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.834293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.834318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.834429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.834444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.834451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.834474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.834500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.834521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.834626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.834640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.834648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.834671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.834697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.834718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.834822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.834834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.834841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.834864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.834879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.834890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.834911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.835033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.835046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.835053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.835060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.835076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.835085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.835091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.835102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.835122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.835266] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.835279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.835286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.835293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.835309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.835318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.835324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.166 [2024-07-22 12:22:11.835334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.166 [2024-07-22 12:22:11.835355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.166 [2024-07-22 12:22:11.835469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.166 [2024-07-22 12:22:11.835484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.166 [2024-07-22 12:22:11.835491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.166 [2024-07-22 12:22:11.835498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.166 [2024-07-22 12:22:11.835514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.167 [2024-07-22 12:22:11.835540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.835561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.167 [2024-07-22 12:22:11.835685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.835699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.835706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.167 [2024-07-22 12:22:11.835729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.167 [2024-07-22 12:22:11.835755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.835776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.167 [2024-07-22 12:22:11.835884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.835900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.835907] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.167 [2024-07-22 12:22:11.835930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.835946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.167 [2024-07-22 12:22:11.835956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.835977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.167 [2024-07-22 12:22:11.836086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.836101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.836109] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.167 [2024-07-22 12:22:11.836132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.167 [2024-07-22 12:22:11.836158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.836178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.167 [2024-07-22 12:22:11.836280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.836292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.836299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.167 [2024-07-22 12:22:11.836322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.167 [2024-07-22 12:22:11.836348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.836368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.167 [2024-07-22 12:22:11.836477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.836493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.836499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.167 
[2024-07-22 12:22:11.836523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.836539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.167 [2024-07-22 12:22:11.836549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.836570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.167 [2024-07-22 12:22:11.840627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.840644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.840651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.840658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.167 [2024-07-22 12:22:11.840674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.840698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.840704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda6630) 00:27:04.167 [2024-07-22 12:22:11.840715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.840738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf5400, cid 3, qid 0 00:27:04.167 [2024-07-22 12:22:11.840886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.840899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.840909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.840917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf5400) on tqpair=0xda6630 00:27:04.167 [2024-07-22 12:22:11.840930] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:27:04.167 00:27:04.167 12:22:11 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:04.167 [2024-07-22 12:22:11.877053] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:27:04.167 [2024-07-22 12:22:11.877100] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083922 ] 00:27:04.167 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.167 [2024-07-22 12:22:11.893925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
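The -r argument handed to spdk_nvme_identify above is an SPDK transport ID string; the tool parses it and opens a controller handle over NVMe/TCP before walking the init state machine traced below. A rough equivalent using SPDK's public API (a sketch, not the tool's actual source; error handling mostly elided):

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int
main(void)
{
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }

        /* Same connection parameters as the -r string above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* Runs the whole sequence traced below: icreq/icresp, FABRIC CONNECT,
         * property reads of VS/CAP/CC, CC.EN = 1, then wait for CSTS.RDY = 1. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
}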
00:27:04.167 [2024-07-22 12:22:11.911349] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:04.167 [2024-07-22 12:22:11.911391] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:04.167 [2024-07-22 12:22:11.911400] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:04.167 [2024-07-22 12:22:11.911417] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:04.167 [2024-07-22 12:22:11.911426] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:04.167 [2024-07-22 12:22:11.911670] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:04.167 [2024-07-22 12:22:11.911710] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15ef630 0 00:27:04.167 [2024-07-22 12:22:11.922624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:04.167 [2024-07-22 12:22:11.922642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:04.167 [2024-07-22 12:22:11.922649] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:04.167 [2024-07-22 12:22:11.922655] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:04.167 [2024-07-22 12:22:11.922706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.922718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.922725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.167 [2024-07-22 12:22:11.922740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:04.167 [2024-07-22 12:22:11.922766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.167 [2024-07-22 12:22:11.930645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.930662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.930669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.930676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.167 [2024-07-22 12:22:11.930693] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:04.167 [2024-07-22 12:22:11.930720] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:04.167 [2024-07-22 12:22:11.930729] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:04.167 [2024-07-22 12:22:11.930755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.930764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.930771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.167 [2024-07-22 12:22:11.930783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.930807] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.167 [2024-07-22 12:22:11.930925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.930940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.167 [2024-07-22 12:22:11.930947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.930954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.167 [2024-07-22 12:22:11.930966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:04.167 [2024-07-22 12:22:11.930981] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:04.167 [2024-07-22 12:22:11.930993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.931001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.167 [2024-07-22 12:22:11.931007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.167 [2024-07-22 12:22:11.931018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.167 [2024-07-22 12:22:11.931040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.167 [2024-07-22 12:22:11.931153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.167 [2024-07-22 12:22:11.931165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.168 [2024-07-22 12:22:11.931172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.168 [2024-07-22 12:22:11.931187] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:04.168 [2024-07-22 12:22:11.931201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:04.168 [2024-07-22 12:22:11.931213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.931237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.168 [2024-07-22 12:22:11.931259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.168 [2024-07-22 12:22:11.931364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.168 [2024-07-22 12:22:11.931377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.168 [2024-07-22 12:22:11.931383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.168 [2024-07-22 12:22:11.931399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:04.168 [2024-07-22 12:22:11.931415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.931445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.168 [2024-07-22 12:22:11.931467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.168 [2024-07-22 12:22:11.931575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.168 [2024-07-22 12:22:11.931590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.168 [2024-07-22 12:22:11.931597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.168 [2024-07-22 12:22:11.931611] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:04.168 [2024-07-22 12:22:11.931629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:04.168 [2024-07-22 12:22:11.931644] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:04.168 [2024-07-22 12:22:11.931753] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:04.168 [2024-07-22 12:22:11.931760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:04.168 [2024-07-22 12:22:11.931772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.931796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.168 [2024-07-22 12:22:11.931818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.168 [2024-07-22 12:22:11.931926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.168 [2024-07-22 12:22:11.931938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.168 [2024-07-22 12:22:11.931945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.168 [2024-07-22 12:22:11.931960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:04.168 [2024-07-22 12:22:11.931976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931985] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.931991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.932002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.168 [2024-07-22 12:22:11.932023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.168 [2024-07-22 12:22:11.932130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.168 [2024-07-22 12:22:11.932145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.168 [2024-07-22 12:22:11.932152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.932159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.168 [2024-07-22 12:22:11.932166] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:04.168 [2024-07-22 12:22:11.932175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:04.168 [2024-07-22 12:22:11.932192] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:04.168 [2024-07-22 12:22:11.932206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:04.168 [2024-07-22 12:22:11.932219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.932226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.932237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.168 [2024-07-22 12:22:11.932259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.168 [2024-07-22 12:22:11.932405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:04.168 [2024-07-22 12:22:11.932417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.168 [2024-07-22 12:22:11.932424] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.932430] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=4096, cccid=0 00:27:04.168 [2024-07-22 12:22:11.932438] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163df80) on tqpair(0x15ef630): expected_datao=0, payload_size=4096 00:27:04.168 [2024-07-22 12:22:11.932445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.932461] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.932470] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.973645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.168 [2024-07-22 12:22:11.973664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.168 [2024-07-22 12:22:11.973672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
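With CC.EN = 1 acknowledged and CSTS.RDY = 1, the host issues IDENTIFY with CNS 01h (the capsule logged as IDENTIFY (06) ... cdw10:00000001), and the target returns the 4096-byte controller data structure as a C2H Data PDU (datal=4096, cccid=0). The same transfer can be driven by hand through the raw admin interface; a sketch assuming a connected ctrlr as in the previous snippet (identify_ctrlr_raw, identify_done and g_done are illustrative names):

#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"
#include <stdbool.h>
#include <string.h>

static bool g_done;

static void
identify_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        g_done = true;
}

/* Issue IDENTIFY CNS 01h (controller data structure) as a raw admin command
 * and poll the admin queue until the completion arrives. */
static int
identify_ctrlr_raw(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ctrlr_data *out)
{
        struct spdk_nvme_cmd cmd = {};
        /* 4096-byte payload, matching datal=4096 in the C2H trace above. */
        void *buf = spdk_dma_zmalloc(4096, 4096, NULL);

        if (buf == NULL) {
                return -1;
        }
        cmd.opc = SPDK_NVME_OPC_IDENTIFY;
        cmd.cdw10 = SPDK_NVME_IDENTIFY_CTRLR; /* CNS 01h, cdw10:00000001 */

        g_done = false;
        if (spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, buf, 4096,
                                          identify_done, NULL) != 0) {
                spdk_dma_free(buf);
                return -1;
        }
        while (!g_done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        memcpy(out, buf, sizeof(*out));
        spdk_dma_free(buf);
        return 0;
}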
00:27:04.168 [2024-07-22 12:22:11.973679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.168 [2024-07-22 12:22:11.973694] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:04.168 [2024-07-22 12:22:11.973704] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:04.168 [2024-07-22 12:22:11.973712] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:04.168 [2024-07-22 12:22:11.973718] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:04.168 [2024-07-22 12:22:11.973726] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:04.168 [2024-07-22 12:22:11.973734] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:04.168 [2024-07-22 12:22:11.973749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:04.168 [2024-07-22 12:22:11.973762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.973769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.973776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.973788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:04.168 [2024-07-22 12:22:11.973812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.168 [2024-07-22 12:22:11.973922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.168 [2024-07-22 12:22:11.973937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.168 [2024-07-22 12:22:11.973944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.973951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.168 [2024-07-22 12:22:11.973965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.973974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.973980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.973990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.168 [2024-07-22 12:22:11.974001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.974007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.974014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.974022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.168 [2024-07-22 12:22:11.974032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 
12:22:11.974039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.974045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.974054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.168 [2024-07-22 12:22:11.974064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.974070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.974076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.974085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.168 [2024-07-22 12:22:11.974095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:04.168 [2024-07-22 12:22:11.974128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:04.168 [2024-07-22 12:22:11.974141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.168 [2024-07-22 12:22:11.974149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ef630) 00:27:04.168 [2024-07-22 12:22:11.974159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.168 [2024-07-22 12:22:11.974182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163df80, cid 0, qid 0 00:27:04.168 [2024-07-22 12:22:11.974209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e100, cid 1, qid 0 00:27:04.169 [2024-07-22 12:22:11.974217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e280, cid 2, qid 0 00:27:04.169 [2024-07-22 12:22:11.974225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e400, cid 3, qid 0 00:27:04.169 [2024-07-22 12:22:11.974233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e580, cid 4, qid 0 00:27:04.169 [2024-07-22 12:22:11.974369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.169 [2024-07-22 12:22:11.974384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.169 [2024-07-22 12:22:11.974391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e580) on tqpair=0x15ef630 00:27:04.169 [2024-07-22 12:22:11.974406] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:04.169 [2024-07-22 12:22:11.974415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.974429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.974445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number 
of queues (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.974457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ef630) 00:27:04.169 [2024-07-22 12:22:11.974481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:04.169 [2024-07-22 12:22:11.974503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e580, cid 4, qid 0 00:27:04.169 [2024-07-22 12:22:11.974626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.169 [2024-07-22 12:22:11.974642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.169 [2024-07-22 12:22:11.974649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e580) on tqpair=0x15ef630 00:27:04.169 [2024-07-22 12:22:11.974726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.974749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.974764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ef630) 00:27:04.169 [2024-07-22 12:22:11.974782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.169 [2024-07-22 12:22:11.974804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e580, cid 4, qid 0 00:27:04.169 [2024-07-22 12:22:11.974940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:04.169 [2024-07-22 12:22:11.974952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.169 [2024-07-22 12:22:11.974959] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974966] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=4096, cccid=4 00:27:04.169 [2024-07-22 12:22:11.974973] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163e580) on tqpair(0x15ef630): expected_datao=0, payload_size=4096 00:27:04.169 [2024-07-22 12:22:11.974981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974991] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.974998] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.169 [2024-07-22 12:22:11.975020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.169 [2024-07-22 12:22:11.975026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e580) on tqpair=0x15ef630 00:27:04.169 [2024-07-22 
12:22:11.975050] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:04.169 [2024-07-22 12:22:11.975071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ef630) 00:27:04.169 [2024-07-22 12:22:11.975124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.169 [2024-07-22 12:22:11.975147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e580, cid 4, qid 0 00:27:04.169 [2024-07-22 12:22:11.975285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:04.169 [2024-07-22 12:22:11.975300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.169 [2024-07-22 12:22:11.975307] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975314] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=4096, cccid=4 00:27:04.169 [2024-07-22 12:22:11.975321] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163e580) on tqpair(0x15ef630): expected_datao=0, payload_size=4096 00:27:04.169 [2024-07-22 12:22:11.975329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975339] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975346] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.169 [2024-07-22 12:22:11.975367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.169 [2024-07-22 12:22:11.975374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e580) on tqpair=0x15ef630 00:27:04.169 [2024-07-22 12:22:11.975405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ef630) 00:27:04.169 [2024-07-22 12:22:11.975457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.169 [2024-07-22 12:22:11.975478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e580, cid 4, qid 0 00:27:04.169 [2024-07-22 12:22:11.975602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
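The namespace phase above is three IDENTIFY commands in a row: CNS 02h (cdw10:00000002) for the active namespace list, then CNS 00h and CNS 03h against nsid 1 for the namespace data structure and its identification descriptors. Once spdk_nvme_connect() has replayed that sequence, the discovered namespaces are reachable through the public accessors; a sketch (list_namespaces is an illustrative name):

#include "spdk/nvme.h"
#include <inttypes.h>
#include <stdio.h>

/* Walk every active namespace the IDENTIFY sequence above discovered. */
static void
list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                printf("Namespace %" PRIu32 ": %" PRIu64 " bytes, %" PRIu32 "-byte sectors\n",
                       nsid, spdk_nvme_ns_get_size(ns),
                       spdk_nvme_ns_get_sector_size(ns));
        }
}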
00:27:04.169 [2024-07-22 12:22:11.975622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.169 [2024-07-22 12:22:11.975630] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975636] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=4096, cccid=4 00:27:04.169 [2024-07-22 12:22:11.975644] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163e580) on tqpair(0x15ef630): expected_datao=0, payload_size=4096 00:27:04.169 [2024-07-22 12:22:11.975651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975661] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975668] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.169 [2024-07-22 12:22:11.975689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.169 [2024-07-22 12:22:11.975696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e580) on tqpair=0x15ef630 00:27:04.169 [2024-07-22 12:22:11.975715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975787] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:04.169 [2024-07-22 12:22:11.975795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:04.169 [2024-07-22 12:22:11.975804] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:04.169 [2024-07-22 12:22:11.975823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.169 [2024-07-22 12:22:11.975831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ef630) 00:27:04.169 [2024-07-22 12:22:11.975842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.169 [2024-07-22 12:22:11.975853] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.975860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:27:04.170 [2024-07-22 12:22:11.975866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.975876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.170 [2024-07-22 12:22:11.975901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e580, cid 4, qid 0 00:27:04.170 [2024-07-22 12:22:11.975913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e700, cid 5, qid 0 00:27:04.170 [2024-07-22 12:22:11.976035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.976051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.976057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e580) on tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.976074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.976083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.976090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e700) on tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.976112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.976131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.170 [2024-07-22 12:22:11.976153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e700, cid 5, qid 0 00:27:04.170 [2024-07-22 12:22:11.976267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.976282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.976289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e700) on tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.976311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.976335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.170 [2024-07-22 12:22:11.976356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e700, cid 5, qid 0 00:27:04.170 [2024-07-22 12:22:11.976463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.976475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.976482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e700) on 
tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.976504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.976523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.170 [2024-07-22 12:22:11.976544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e700, cid 5, qid 0 00:27:04.170 [2024-07-22 12:22:11.976647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.976661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.976667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e700) on tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.976697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.976719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.170 [2024-07-22 12:22:11.976731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.976748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.170 [2024-07-22 12:22:11.976759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.976776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.170 [2024-07-22 12:22:11.976788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.976795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15ef630) 00:27:04.170 [2024-07-22 12:22:11.976805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.170 [2024-07-22 12:22:11.976828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e700, cid 5, qid 0 00:27:04.170 [2024-07-22 12:22:11.976839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e580, cid 4, qid 0 00:27:04.170 [2024-07-22 12:22:11.976847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e880, cid 6, qid 0 00:27:04.170 [2024-07-22 12:22:11.976855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ea00, cid 7, qid 0 00:27:04.170 [2024-07-22 12:22:11.977048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:04.170 [2024-07-22 
12:22:11.977060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.170 [2024-07-22 12:22:11.977071] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977078] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=8192, cccid=5 00:27:04.170 [2024-07-22 12:22:11.977085] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163e700) on tqpair(0x15ef630): expected_datao=0, payload_size=8192 00:27:04.170 [2024-07-22 12:22:11.977092] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977133] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977144] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:04.170 [2024-07-22 12:22:11.977161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.170 [2024-07-22 12:22:11.977167] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977174] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=512, cccid=4 00:27:04.170 [2024-07-22 12:22:11.977181] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163e580) on tqpair(0x15ef630): expected_datao=0, payload_size=512 00:27:04.170 [2024-07-22 12:22:11.977188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977197] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977204] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:04.170 [2024-07-22 12:22:11.977221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.170 [2024-07-22 12:22:11.977228] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977234] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=512, cccid=6 00:27:04.170 [2024-07-22 12:22:11.977241] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163e880) on tqpair(0x15ef630): expected_datao=0, payload_size=512 00:27:04.170 [2024-07-22 12:22:11.977248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977257] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977264] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:04.170 [2024-07-22 12:22:11.977281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:04.170 [2024-07-22 12:22:11.977288] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977294] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ef630): datao=0, datal=4096, cccid=7 00:27:04.170 [2024-07-22 12:22:11.977301] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163ea00) on tqpair(0x15ef630): expected_datao=0, payload_size=4096 00:27:04.170 [2024-07-22 12:22:11.977308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.170 [2024-07-22 
12:22:11.977318] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977324] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.977345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.977352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e700) on tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.977376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.977387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.977393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e580) on tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.977432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.977442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.977449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e880) on tqpair=0x15ef630 00:27:04.170 [2024-07-22 12:22:11.977465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.170 [2024-07-22 12:22:11.977474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.170 [2024-07-22 12:22:11.977480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.170 [2024-07-22 12:22:11.977501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ea00) on tqpair=0x15ef630 00:27:04.170 ===================================================== 00:27:04.170 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.170 ===================================================== 00:27:04.170 Controller Capabilities/Features 00:27:04.170 ================================ 00:27:04.170 Vendor ID: 8086 00:27:04.170 Subsystem Vendor ID: 8086 00:27:04.170 Serial Number: SPDK00000000000001 00:27:04.170 Model Number: SPDK bdev Controller 00:27:04.170 Firmware Version: 24.09 00:27:04.170 Recommended Arb Burst: 6 00:27:04.170 IEEE OUI Identifier: e4 d2 5c 00:27:04.170 Multi-path I/O 00:27:04.170 May have multiple subsystem ports: Yes 00:27:04.170 May have multiple controllers: Yes 00:27:04.170 Associated with SR-IOV VF: No 00:27:04.170 Max Data Transfer Size: 131072 00:27:04.170 Max Number of Namespaces: 32 00:27:04.170 Max Number of I/O Queues: 127 00:27:04.170 NVMe Specification Version (VS): 1.3 00:27:04.170 NVMe Specification Version (Identify): 1.3 00:27:04.171 Maximum Queue Entries: 128 00:27:04.171 Contiguous Queues Required: Yes 00:27:04.171 Arbitration Mechanisms Supported 00:27:04.171 Weighted Round Robin: Not Supported 00:27:04.171 Vendor Specific: Not Supported 00:27:04.171 Reset Timeout: 15000 ms 00:27:04.171 Doorbell Stride: 4 bytes 00:27:04.171 NVM Subsystem Reset: Not Supported 00:27:04.171 Command Sets Supported 00:27:04.171 NVM Command Set: Supported 00:27:04.171 Boot Partition: Not Supported 00:27:04.171 Memory Page Size Minimum: 4096 bytes 00:27:04.171 
Memory Page Size Maximum: 4096 bytes 00:27:04.171 Persistent Memory Region: Not Supported 00:27:04.171 Optional Asynchronous Events Supported 00:27:04.171 Namespace Attribute Notices: Supported 00:27:04.171 Firmware Activation Notices: Not Supported 00:27:04.171 ANA Change Notices: Not Supported 00:27:04.171 PLE Aggregate Log Change Notices: Not Supported 00:27:04.171 LBA Status Info Alert Notices: Not Supported 00:27:04.171 EGE Aggregate Log Change Notices: Not Supported 00:27:04.171 Normal NVM Subsystem Shutdown event: Not Supported 00:27:04.171 Zone Descriptor Change Notices: Not Supported 00:27:04.171 Discovery Log Change Notices: Not Supported 00:27:04.171 Controller Attributes 00:27:04.171 128-bit Host Identifier: Supported 00:27:04.171 Non-Operational Permissive Mode: Not Supported 00:27:04.171 NVM Sets: Not Supported 00:27:04.171 Read Recovery Levels: Not Supported 00:27:04.171 Endurance Groups: Not Supported 00:27:04.171 Predictable Latency Mode: Not Supported 00:27:04.171 Traffic Based Keep Alive: Not Supported 00:27:04.171 Namespace Granularity: Not Supported 00:27:04.171 SQ Associations: Not Supported 00:27:04.171 UUID List: Not Supported 00:27:04.171 Multi-Domain Subsystem: Not Supported 00:27:04.171 Fixed Capacity Management: Not Supported 00:27:04.171 Variable Capacity Management: Not Supported 00:27:04.171 Delete Endurance Group: Not Supported 00:27:04.171 Delete NVM Set: Not Supported 00:27:04.171 Extended LBA Formats Supported: Not Supported 00:27:04.171 Flexible Data Placement Supported: Not Supported 00:27:04.171 00:27:04.171 Controller Memory Buffer Support 00:27:04.171 ================================ 00:27:04.171 Supported: No 00:27:04.171 00:27:04.171 Persistent Memory Region Support 00:27:04.171 ================================ 00:27:04.171 Supported: No 00:27:04.171 00:27:04.171 Admin Command Set Attributes 00:27:04.171 ============================ 00:27:04.171 Security Send/Receive: Not Supported 00:27:04.171 Format NVM: Not Supported 00:27:04.171 Firmware Activate/Download: Not Supported 00:27:04.171 Namespace Management: Not Supported 00:27:04.171 Device Self-Test: Not Supported 00:27:04.171 Directives: Not Supported 00:27:04.171 NVMe-MI: Not Supported 00:27:04.171 Virtualization Management: Not Supported 00:27:04.171 Doorbell Buffer Config: Not Supported 00:27:04.171 Get LBA Status Capability: Not Supported 00:27:04.171 Command & Feature Lockdown Capability: Not Supported 00:27:04.171 Abort Command Limit: 4 00:27:04.171 Async Event Request Limit: 4 00:27:04.171 Number of Firmware Slots: N/A 00:27:04.171 Firmware Slot 1 Read-Only: N/A 00:27:04.171 Firmware Activation Without Reset: N/A 00:27:04.171 Multiple Update Detection Support: N/A 00:27:04.171 Firmware Update Granularity: No Information Provided 00:27:04.171 Per-Namespace SMART Log: No 00:27:04.171 Asymmetric Namespace Access Log Page: Not Supported 00:27:04.171 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:04.171 Command Effects Log Page: Supported 00:27:04.171 Get Log Page Extended Data: Supported 00:27:04.171 Telemetry Log Pages: Not Supported 00:27:04.171 Persistent Event Log Pages: Not Supported 00:27:04.171 Supported Log Pages Log Page: May Support 00:27:04.171 Commands Supported & Effects Log Page: Not Supported 00:27:04.171 Feature Identifiers & Effects Log Page: May Support 00:27:04.171 NVMe-MI Commands & Effects Log Page: May Support 00:27:04.171 Data Area 4 for Telemetry Log: Not Supported 00:27:04.171 Error Log Page Entries Supported: 128 00:27:04.171 Keep Alive: Supported 00:27:04.171 Keep 
Alive Granularity: 10000 ms 00:27:04.171 00:27:04.171 NVM Command Set Attributes 00:27:04.171 ========================== 00:27:04.171 Submission Queue Entry Size 00:27:04.171 Max: 64 00:27:04.171 Min: 64 00:27:04.171 Completion Queue Entry Size 00:27:04.171 Max: 16 00:27:04.171 Min: 16 00:27:04.171 Number of Namespaces: 32 00:27:04.171 Compare Command: Supported 00:27:04.171 Write Uncorrectable Command: Not Supported 00:27:04.171 Dataset Management Command: Supported 00:27:04.171 Write Zeroes Command: Supported 00:27:04.171 Set Features Save Field: Not Supported 00:27:04.171 Reservations: Supported 00:27:04.171 Timestamp: Not Supported 00:27:04.171 Copy: Supported 00:27:04.171 Volatile Write Cache: Present 00:27:04.171 Atomic Write Unit (Normal): 1 00:27:04.171 Atomic Write Unit (PFail): 1 00:27:04.171 Atomic Compare & Write Unit: 1 00:27:04.171 Fused Compare & Write: Supported 00:27:04.171 Scatter-Gather List 00:27:04.171 SGL Command Set: Supported 00:27:04.171 SGL Keyed: Supported 00:27:04.171 SGL Bit Bucket Descriptor: Not Supported 00:27:04.171 SGL Metadata Pointer: Not Supported 00:27:04.171 Oversized SGL: Not Supported 00:27:04.171 SGL Metadata Address: Not Supported 00:27:04.171 SGL Offset: Supported 00:27:04.171 Transport SGL Data Block: Not Supported 00:27:04.171 Replay Protected Memory Block: Not Supported 00:27:04.171 00:27:04.171 Firmware Slot Information 00:27:04.171 ========================= 00:27:04.171 Active slot: 1 00:27:04.171 Slot 1 Firmware Revision: 24.09 00:27:04.171 00:27:04.171 00:27:04.171 Commands Supported and Effects 00:27:04.171 ============================== 00:27:04.171 Admin Commands 00:27:04.171 -------------- 00:27:04.171 Get Log Page (02h): Supported 00:27:04.171 Identify (06h): Supported 00:27:04.171 Abort (08h): Supported 00:27:04.171 Set Features (09h): Supported 00:27:04.171 Get Features (0Ah): Supported 00:27:04.171 Asynchronous Event Request (0Ch): Supported 00:27:04.171 Keep Alive (18h): Supported 00:27:04.171 I/O Commands 00:27:04.171 ------------ 00:27:04.171 Flush (00h): Supported LBA-Change 00:27:04.171 Write (01h): Supported LBA-Change 00:27:04.171 Read (02h): Supported 00:27:04.171 Compare (05h): Supported 00:27:04.171 Write Zeroes (08h): Supported LBA-Change 00:27:04.171 Dataset Management (09h): Supported LBA-Change 00:27:04.171 Copy (19h): Supported LBA-Change 00:27:04.171 00:27:04.171 Error Log 00:27:04.171 ========= 00:27:04.171 00:27:04.171 Arbitration 00:27:04.171 =========== 00:27:04.171 Arbitration Burst: 1 00:27:04.171 00:27:04.171 Power Management 00:27:04.171 ================ 00:27:04.171 Number of Power States: 1 00:27:04.171 Current Power State: Power State #0 00:27:04.171 Power State #0: 00:27:04.171 Max Power: 0.00 W 00:27:04.171 Non-Operational State: Operational 00:27:04.171 Entry Latency: Not Reported 00:27:04.171 Exit Latency: Not Reported 00:27:04.171 Relative Read Throughput: 0 00:27:04.171 Relative Read Latency: 0 00:27:04.171 Relative Write Throughput: 0 00:27:04.171 Relative Write Latency: 0 00:27:04.171 Idle Power: Not Reported 00:27:04.171 Active Power: Not Reported 00:27:04.171 Non-Operational Permissive Mode: Not Supported 00:27:04.171 00:27:04.171 Health Information 00:27:04.171 ================== 00:27:04.171 Critical Warnings: 00:27:04.171 Available Spare Space: OK 00:27:04.171 Temperature: OK 00:27:04.171 Device Reliability: OK 00:27:04.171 Read Only: No 00:27:04.171 Volatile Memory Backup: OK 00:27:04.171 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:04.171 Temperature Threshold: 0 Kelvin (-273 Celsius) 
00:27:04.171 Available Spare: 0% 00:27:04.171 Available Spare Threshold: 0% 00:27:04.171 Life Percentage Used:[2024-07-22 12:22:11.981635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.171 [2024-07-22 12:22:11.981649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15ef630) 00:27:04.171 [2024-07-22 12:22:11.981660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.171 [2024-07-22 12:22:11.981684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ea00, cid 7, qid 0 00:27:04.171 [2024-07-22 12:22:11.981826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.171 [2024-07-22 12:22:11.981842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.171 [2024-07-22 12:22:11.981849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.171 [2024-07-22 12:22:11.981856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ea00) on tqpair=0x15ef630 00:27:04.171 [2024-07-22 12:22:11.981899] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:04.171 [2024-07-22 12:22:11.981919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163df80) on tqpair=0x15ef630 00:27:04.171 [2024-07-22 12:22:11.981929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.171 [2024-07-22 12:22:11.981938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e100) on tqpair=0x15ef630 00:27:04.171 [2024-07-22 12:22:11.981945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.171 [2024-07-22 12:22:11.981954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e280) on tqpair=0x15ef630 00:27:04.171 [2024-07-22 12:22:11.981961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.172 [2024-07-22 12:22:11.981969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e400) on tqpair=0x15ef630 00:27:04.172 [2024-07-22 12:22:11.981977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.172 [2024-07-22 12:22:11.982004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ef630) 00:27:04.172 [2024-07-22 12:22:11.982028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.172 [2024-07-22 12:22:11.982051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e400, cid 3, qid 0 00:27:04.172 [2024-07-22 12:22:11.982177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.172 [2024-07-22 12:22:11.982190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.172 [2024-07-22 12:22:11.982196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982203] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e400) on tqpair=0x15ef630 00:27:04.172 [2024-07-22 12:22:11.982214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ef630) 00:27:04.172 [2024-07-22 12:22:11.982244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.172 [2024-07-22 12:22:11.982271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e400, cid 3, qid 0 00:27:04.172 [2024-07-22 12:22:11.982388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.172 [2024-07-22 12:22:11.982400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.172 [2024-07-22 12:22:11.982407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e400) on tqpair=0x15ef630 00:27:04.172 [2024-07-22 12:22:11.982421] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:04.172 [2024-07-22 12:22:11.982429] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:04.172 [2024-07-22 12:22:11.982445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ef630) 00:27:04.172 [2024-07-22 12:22:11.982470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.172 [2024-07-22 12:22:11.982491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e400, cid 3, qid 0 00:27:04.172 [2024-07-22 12:22:11.982596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.172 [2024-07-22 12:22:11.982608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.172 [2024-07-22 12:22:11.982622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e400) on tqpair=0x15ef630 00:27:04.172 [2024-07-22 12:22:11.982646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ef630) 00:27:04.172 [2024-07-22 12:22:11.982673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.172 [2024-07-22 12:22:11.982695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e400, cid 3, qid 0 00:27:04.172 [2024-07-22 12:22:11.982798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.172 [2024-07-22 12:22:11.982810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.172 [2024-07-22 12:22:11.982817] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.172 [2024-07-22 12:22:11.982824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e400) on tqpair=0x15ef630 00:27:04.172 [... a dozen further identical FABRIC PROPERTY GET shutdown-poll cycles, differing only in timestamps, elided ...] 00:27:04.173 [2024-07-22 12:22:11.989621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e400, cid 3, qid 0 00:27:04.173 [2024-07-22 12:22:11.989642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.173 [2024-07-22 12:22:11.989652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.173 [2024-07-22 12:22:11.989659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.173 [2024-07-22 12:22:11.989666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e400) on tqpair=0x15ef630 00:27:04.173 [2024-07-22 12:22:11.989698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:04.173 [2024-07-22 12:22:11.989708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:04.173 [2024-07-22 12:22:11.989714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ef630) 00:27:04.173 [2024-07-22 12:22:11.989725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.173 [2024-07-22 12:22:11.989748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163e400, cid 3, qid 0 00:27:04.173 [2024-07-22 12:22:11.989870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:04.173 [2024-07-22 12:22:11.989883] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:04.173 [2024-07-22 12:22:11.989891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:04.173 [2024-07-22 12:22:11.989897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163e400) on tqpair=0x15ef630 00:27:04.173 [2024-07-22 12:22:11.989910] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:04.173 0% 00:27:04.173 Data Units Read: 0 00:27:04.173 Data Units Written: 0 00:27:04.173 Host Read Commands: 0 00:27:04.173 Host Write Commands: 0 00:27:04.173 Controller Busy Time: 0 minutes 00:27:04.173 Power Cycles: 0 00:27:04.173 Power On Hours: 0 hours 00:27:04.173 Unsafe Shutdowns: 0 00:27:04.173 Unrecoverable Media Errors: 0 00:27:04.173 Lifetime Error Log Entries: 0 00:27:04.173 Warning Temperature Time: 0 minutes 00:27:04.173 Critical Temperature Time: 0 minutes 00:27:04.173 00:27:04.173 Number of Queues 00:27:04.173 ================ 00:27:04.173 Number of I/O Submission Queues: 127 00:27:04.173 Number of I/O Completion Queues: 127 00:27:04.173 00:27:04.173 Active Namespaces 00:27:04.173 ================= 00:27:04.173 Namespace ID:1 00:27:04.173 Error Recovery Timeout: Unlimited 00:27:04.173 Command Set Identifier: NVM (00h) 00:27:04.173 Deallocate: Supported 00:27:04.173 Deallocated/Unwritten Error: Not Supported 00:27:04.173 Deallocated Read Value: Unknown 00:27:04.173 Deallocate in Write Zeroes: Not Supported 00:27:04.173 Deallocated Guard Field: 0xFFFF 00:27:04.173 Flush: Supported 00:27:04.173 Reservation: Supported 00:27:04.173 Namespace Sharing Capabilities: Multiple Controllers 00:27:04.173 Size (in LBAs): 131072 (0GiB) 00:27:04.173 Capacity (in LBAs): 131072 (0GiB) 00:27:04.173 Utilization (in LBAs): 131072 (0GiB) 00:27:04.173 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:04.173 EUI64: ABCDEF0123456789 00:27:04.173 UUID: dd91c931-f776-4cce-9f89-f319ece6d1ad 00:27:04.173 Thin Provisioning: Not Supported 00:27:04.173 Per-NS Atomic Units: Yes 00:27:04.173 Atomic Boundary Size (Normal): 0 00:27:04.173 Atomic Boundary Size (PFail): 0 00:27:04.173 Atomic Boundary Offset: 0 00:27:04.173 Maximum Single Source Range Length: 65535 00:27:04.173 Maximum Copy Length: 65535 00:27:04.173 Maximum Source Range Count: 1 00:27:04.173 NGUID/EUI64 Never Reused: No 00:27:04.173 Namespace Write Protected: No 00:27:04.173 Number of LBA Formats: 1 00:27:04.174 Current LBA Format: LBA Format #00 00:27:04.174 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:04.174 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.174 12:22:12 
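The controller and namespace report printed above is the output of SPDK's identify example, which host/identify.sh runs against the TCP listener; note that "Slot 1 Firmware Revision: 24.09" is simply the SPDK version string the target advertises as its firmware revision. As a rough sketch of reproducing the same dump by hand, assuming the build layout used in this workspace (the spdk_nvme_identify binary name is an assumption, inferred from where spdk_nvme_perf lives later in this log):

# Dump controller, namespace, and feature data over NVMe/TCP;
# the transport string matches the listener the test created.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'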
nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.174 rmmod nvme_tcp 00:27:04.174 rmmod nvme_fabrics 00:27:04.174 rmmod nvme_keyring 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1083886 ']' 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1083886 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1083886 ']' 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1083886 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1083886 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1083886' 00:27:04.174 killing process with pid 1083886 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1083886 00:27:04.174 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1083886 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.432 12:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.958 12:22:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.958 00:27:06.958 real 0m5.344s 00:27:06.958 user 0m4.327s 00:27:06.958 sys 0m1.844s 00:27:06.958 12:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:06.958 12:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:06.958 ************************************ 00:27:06.958 END TEST nvmf_identify 00:27:06.958 ************************************ 00:27:06.958 12:22:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:06.958 12:22:14 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:06.958 12:22:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:06.958 12:22:14 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.958 12:22:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.958 ************************************ 00:27:06.958 START TEST nvmf_perf 00:27:06.958 ************************************ 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:06.958 * Looking for test storage... 00:27:06.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.958 
12:22:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.958 12:22:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:08.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.854 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:08.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:08.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:08.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:08.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:27:08.855 00:27:08.855 --- 10.0.0.2 ping statistics --- 00:27:08.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.855 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:08.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:08.855 00:27:08.855 --- 10.0.0.1 ping statistics --- 00:27:08.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.855 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1085842 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1085842 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1085842 ']' 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.855 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:08.855 [2024-07-22 12:22:16.520425] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:27:08.855 [2024-07-22 12:22:16.520522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.855 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.855 [2024-07-22 12:22:16.561132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:08.855 [2024-07-22 12:22:16.588094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.855 [2024-07-22 12:22:16.686426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
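The interface shuffling traced above is nvmftestinit wiring the two E810 ports back-to-back: one port is moved into a private network namespace (cvl_0_0_ns_spdk) to act as the target side, the other stays in the root namespace as the initiator, and a ping in each direction proves the link before the target starts. Condensed from the trace, the setup is roughly:

# Target port goes into its own netns; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

This is also why the nvmf_tgt command above is wrapped in ip netns exec cvl_0_0_ns_spdk: the target must listen on the port that was moved into the namespace.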
00:27:08.855 [2024-07-22 12:22:16.686481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.855 [2024-07-22 12:22:16.686510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.855 [2024-07-22 12:22:16.686522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.855 [2024-07-22 12:22:16.686532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.855 [2024-07-22 12:22:16.686620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.855 [2024-07-22 12:22:16.686661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.855 [2024-07-22 12:22:16.686711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.855 [2024-07-22 12:22:16.686714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:09.112 12:22:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:12.383 12:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:12.383 12:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:12.383 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:12.383 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:12.639 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:12.639 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:12.639 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:12.639 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:12.639 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:12.895 [2024-07-22 12:22:20.758269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.895 12:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:13.152 12:22:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:13.152 12:22:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:13.408 12:22:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev 
in $bdevs 00:27:13.408 12:22:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:13.666 12:22:21 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:13.923 [2024-07-22 12:22:21.741871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.923 12:22:21 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:14.180 12:22:22 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:14.180 12:22:22 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:14.180 12:22:22 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:14.180 12:22:22 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:15.554 Initializing NVMe Controllers 00:27:15.554 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:15.554 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:15.554 Initialization complete. Launching workers. 00:27:15.554 ======================================================== 00:27:15.554 Latency(us) 00:27:15.554 Device Information : IOPS MiB/s Average min max 00:27:15.554 PCIE (0000:88:00.0) NSID 1 from core 0: 85261.25 333.05 374.85 16.38 6510.54 00:27:15.554 ======================================================== 00:27:15.554 Total : 85261.25 333.05 374.85 16.38 6510.54 00:27:15.554 00:27:15.554 12:22:23 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:15.554 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.921 Initializing NVMe Controllers 00:27:16.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:16.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:16.921 Initialization complete. Launching workers. 
00:27:16.921 ======================================================== 00:27:16.921 Latency(us) 00:27:16.921 Device Information : IOPS MiB/s Average min max 00:27:16.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 104.00 0.41 9763.89 172.79 45744.64 00:27:16.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19754.21 6969.86 50874.78 00:27:16.921 ======================================================== 00:27:16.921 Total : 155.00 0.61 13051.03 172.79 50874.78 00:27:16.921 00:27:16.921 12:22:24 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.921 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.852 Initializing NVMe Controllers 00:27:17.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:17.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:17.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:17.852 Initialization complete. Launching workers. 00:27:17.852 ======================================================== 00:27:17.852 Latency(us) 00:27:17.852 Device Information : IOPS MiB/s Average min max 00:27:17.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8404.99 32.83 3823.16 638.56 7464.89 00:27:17.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3843.00 15.01 8384.30 5025.33 17218.44 00:27:17.852 ======================================================== 00:27:17.852 Total : 12247.99 47.84 5254.29 638.56 17218.44 00:27:17.852 00:27:18.109 12:22:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:18.109 12:22:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:18.109 12:22:25 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:18.109 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.635 Initializing NVMe Controllers 00:27:20.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:20.635 Controller IO queue size 128, less than required. 00:27:20.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:20.635 Controller IO queue size 128, less than required. 00:27:20.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:20.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:20.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:20.635 Initialization complete. Launching workers. 
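The back-to-back runs in this stretch are all the same spdk_nvme_perf binary; only the queue depth (-q), I/O size in bytes (-o), and duration in seconds (-t) change between them, with -w randrw -M 50 selecting a 50/50 random read/write mix and -r pointing at either the local PCIe SSD or the TCP listener. A representative invocation, with the flags exactly as traced for the -q 128 run above:

# 128-deep 256 KiB random 50/50 read/write for 2 seconds against the TCP target
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'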
00:27:20.635 ======================================================== 00:27:20.635 Latency(us) 00:27:20.635 Device Information : IOPS MiB/s Average min max 00:27:20.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1318.26 329.57 98970.59 69231.81 148418.43 00:27:20.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.89 146.72 225885.71 70516.10 342116.99 00:27:20.635 ======================================================== 00:27:20.635 Total : 1905.16 476.29 138067.51 69231.81 342116.99 00:27:20.635 00:27:20.635 12:22:28 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:20.635 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.893 No valid NVMe controllers or AIO or URING devices found 00:27:20.893 Initializing NVMe Controllers 00:27:20.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:20.893 Controller IO queue size 128, less than required. 00:27:20.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:20.893 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:20.893 Controller IO queue size 128, less than required. 00:27:20.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:20.893 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:20.893 WARNING: Some requested NVMe devices were skipped 00:27:20.893 12:22:28 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:20.893 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.424 Initializing NVMe Controllers 00:27:23.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.424 Controller IO queue size 128, less than required. 00:27:23.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:23.424 Controller IO queue size 128, less than required. 00:27:23.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:23.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:23.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:23.424 Initialization complete. Launching workers. 
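Two things worth noting here: the -o 36964 run above was skipped entirely because 36964 bytes is not a multiple of the 512 B sector size (512 x 72 = 36864), so both namespaces were removed from the test and no controllers remained; and the --transport-stat run now starting will append per-lcore TCP transport counters (polls, idle_polls, sock_completions, nvme_completions, submitted_requests, queued_requests) to the usual latency table. A derived busy ratio is often the useful number — a hypothetical one-liner, assuming the "key: value" layout printed below with the dump saved to stats.txt:

  awk '$1 == "polls:" {p = $2} $1 == "idle_polls:" {printf "busy ratio: %.2f\n", 1 - $2 / p}' stats.txt
  # NSID 1 below: 1 - 12760/25578 = 0.50, i.e. roughly half the polls did real work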
00:27:23.424 00:27:23.424 ==================== 00:27:23.424 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:23.424 TCP transport: 00:27:23.424 polls: 25578 00:27:23.424 idle_polls: 12760 00:27:23.424 sock_completions: 12818 00:27:23.424 nvme_completions: 2697 00:27:23.424 submitted_requests: 3996 00:27:23.424 queued_requests: 1 00:27:23.424 00:27:23.424 ==================== 00:27:23.424 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:23.424 TCP transport: 00:27:23.424 polls: 28016 00:27:23.424 idle_polls: 10797 00:27:23.424 sock_completions: 17219 00:27:23.424 nvme_completions: 5411 00:27:23.424 submitted_requests: 8100 00:27:23.424 queued_requests: 1 00:27:23.424 ======================================================== 00:27:23.424 Latency(us) 00:27:23.424 Device Information : IOPS MiB/s Average min max 00:27:23.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 673.87 168.47 195194.55 104102.49 294893.71 00:27:23.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1352.24 338.06 95680.96 48382.25 145147.80 00:27:23.424 ======================================================== 00:27:23.425 Total : 2026.12 506.53 128778.50 48382.25 294893.71 00:27:23.425 00:27:23.425 12:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:23.683 12:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.683 12:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:23.683 12:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:27:23.683 12:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:26.957 12:22:34 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=a9342bfd-c706-40bf-b441-b6c4f28fd2e1 00:27:26.957 12:22:34 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb a9342bfd-c706-40bf-b441-b6c4f28fd2e1 00:27:26.957 12:22:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=a9342bfd-c706-40bf-b441-b6c4f28fd2e1 00:27:26.957 12:22:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:26.957 12:22:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:26.957 12:22:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:26.957 12:22:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:27.214 12:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:27.214 { 00:27:27.214 "uuid": "a9342bfd-c706-40bf-b441-b6c4f28fd2e1", 00:27:27.214 "name": "lvs_0", 00:27:27.214 "base_bdev": "Nvme0n1", 00:27:27.214 "total_data_clusters": 238234, 00:27:27.214 "free_clusters": 238234, 00:27:27.214 "block_size": 512, 00:27:27.214 "cluster_size": 4194304 00:27:27.214 } 00:27:27.214 ]' 00:27:27.214 12:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a9342bfd-c706-40bf-b441-b6c4f28fd2e1") .free_clusters' 00:27:27.214 12:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:27:27.214 12:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a9342bfd-c706-40bf-b441-b6c4f28fd2e1") .cluster_size' 00:27:27.471 12:22:35 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:27.471 12:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:27:27.471 12:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:27:27.471 952936 00:27:27.471 12:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:27.471 12:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:27.471 12:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a9342bfd-c706-40bf-b441-b6c4f28fd2e1 lbd_0 20480 00:27:28.034 12:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=aff55f99-0363-40cf-beb7-b3feba8f4c9a 00:27:28.034 12:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore aff55f99-0363-40cf-beb7-b3feba8f4c9a lvs_n_0 00:27:28.610 12:22:36 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=259e89dd-f144-42cf-8328-b75da585b564 00:27:28.610 12:22:36 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 259e89dd-f144-42cf-8328-b75da585b564 00:27:28.610 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=259e89dd-f144-42cf-8328-b75da585b564 00:27:28.610 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:28.610 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:28.610 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:28.610 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:28.882 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:28.882 { 00:27:28.882 "uuid": "a9342bfd-c706-40bf-b441-b6c4f28fd2e1", 00:27:28.882 "name": "lvs_0", 00:27:28.882 "base_bdev": "Nvme0n1", 00:27:28.882 "total_data_clusters": 238234, 00:27:28.883 "free_clusters": 233114, 00:27:28.883 "block_size": 512, 00:27:28.883 "cluster_size": 4194304 00:27:28.883 }, 00:27:28.883 { 00:27:28.883 "uuid": "259e89dd-f144-42cf-8328-b75da585b564", 00:27:28.883 "name": "lvs_n_0", 00:27:28.883 "base_bdev": "aff55f99-0363-40cf-beb7-b3feba8f4c9a", 00:27:28.883 "total_data_clusters": 5114, 00:27:28.883 "free_clusters": 5114, 00:27:28.883 "block_size": 512, 00:27:28.883 "cluster_size": 4194304 00:27:28.883 } 00:27:28.883 ]' 00:27:28.883 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="259e89dd-f144-42cf-8328-b75da585b564") .free_clusters' 00:27:28.883 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:28.883 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="259e89dd-f144-42cf-8328-b75da585b564") .cluster_size' 00:27:29.139 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:29.139 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:29.139 12:22:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:29.139 20456 00:27:29.139 12:22:36 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:29.139 12:22:36 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 259e89dd-f144-42cf-8328-b75da585b564 lbd_nest_0 20456 00:27:29.396 12:22:37 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=6e27a6d0-0b40-438d-a43d-beb1980b113b 00:27:29.396 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.654 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:29.654 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6e27a6d0-0b40-438d-a43d-beb1980b113b 00:27:29.654 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.911 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:29.911 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:29.911 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:29.911 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:29.911 12:22:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:29.911 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.102 Initializing NVMe Controllers 00:27:42.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:42.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:42.102 Initialization complete. Launching workers. 00:27:42.102 ======================================================== 00:27:42.102 Latency(us) 00:27:42.102 Device Information : IOPS MiB/s Average min max 00:27:42.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.49 0.02 22000.85 201.33 45785.37 00:27:42.102 ======================================================== 00:27:42.102 Total : 45.49 0.02 22000.85 201.33 45785.37 00:27:42.102 00:27:42.102 12:22:48 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:42.102 12:22:48 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.102 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.061 Initializing NVMe Controllers 00:27:52.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:52.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:52.061 Initialization complete. Launching workers. 
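Before this sweep, host/perf.sh builds a nested logical-volume stack: an lvstore on the raw NVMe bdev, a 20 GiB lvol carved from it, a second lvstore nested on that lvol, and finally the lbd_nest_0 lvol that actually gets exported. get_lvs_free_mb is simply free_clusters x cluster_size: 238234 x 4 MiB = 952936 MiB for lvs_0 (capped to 20480 by the script), and 5114 x 4 MiB = 20456 MiB for lvs_n_0 — which is why the nested lvol is created at 20456 rather than 20480. Condensed, with the UUIDs from this run:

  rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
  rpc.py bdev_lvol_create -u a9342bfd-c706-40bf-b441-b6c4f28fd2e1 lbd_0 20480      # size in MiB
  rpc.py bdev_lvol_create_lvstore aff55f99-0363-40cf-beb7-b3feba8f4c9a lvs_n_0     # lvol UUID as base bdev
  rpc.py bdev_lvol_create -u 259e89dd-f144-42cf-8328-b75da585b564 lbd_nest_0 20456

The runs on either side of this point then sweep qd_depth (1, 32, 128) against io_size (512 B, 128 KiB) on the same listener — six 10-second runs in all.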
00:27:52.061 ======================================================== 00:27:52.061 Latency(us) 00:27:52.061 Device Information : IOPS MiB/s Average min max 00:27:52.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.50 10.06 12467.06 5077.60 47898.28 00:27:52.061 ======================================================== 00:27:52.061 Total : 80.50 10.06 12467.06 5077.60 47898.28 00:27:52.061 00:27:52.061 12:22:58 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:52.061 12:22:58 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:52.061 12:22:58 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:52.061 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.021 Initializing NVMe Controllers 00:28:02.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:02.021 Initialization complete. Launching workers. 00:28:02.021 ======================================================== 00:28:02.021 Latency(us) 00:28:02.021 Device Information : IOPS MiB/s Average min max 00:28:02.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7442.53 3.63 4299.23 279.54 12076.52 00:28:02.021 ======================================================== 00:28:02.021 Total : 7442.53 3.63 4299.23 279.54 12076.52 00:28:02.021 00:28:02.021 12:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:02.021 12:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:02.021 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.019 Initializing NVMe Controllers 00:28:12.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:12.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:12.019 Initialization complete. Launching workers. 00:28:12.019 ======================================================== 00:28:12.019 Latency(us) 00:28:12.019 Device Information : IOPS MiB/s Average min max 00:28:12.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2512.90 314.11 12746.98 701.23 29887.29 00:28:12.019 ======================================================== 00:28:12.019 Total : 2512.90 314.11 12746.98 701.23 29887.29 00:28:12.019 00:28:12.019 12:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:12.019 12:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:12.019 12:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:12.019 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.989 Initializing NVMe Controllers 00:28:21.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.989 Controller IO queue size 128, less than required. 00:28:21.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
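The sweep tracks Little's law: sustained IOPS is roughly queue depth divided by average latency. From the tables above, qd=1 at 512 B gives 1/0.0220 s = ~45 IOPS (45.49 reported) and qd=32 at 512 B gives 32/0.00430 s = ~7443 IOPS (7442.53 reported); the qd=128 run completing below lands near 128/0.01086 s as well. A quick check:

  echo 'scale=0; 32 / 0.00429923' | bc    # 7443, vs. the 7442.53 IOPS in the qd=32, 512 B table above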
00:28:21.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.989 Initialization complete. Launching workers. 00:28:21.989 ======================================================== 00:28:21.989 Latency(us) 00:28:21.989 Device Information : IOPS MiB/s Average min max 00:28:21.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11787.27 5.76 10859.74 1725.06 23364.34 00:28:21.989 ======================================================== 00:28:21.989 Total : 11787.27 5.76 10859.74 1725.06 23364.34 00:28:21.989 00:28:21.989 12:23:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:21.989 12:23:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.989 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.206 Initializing NVMe Controllers 00:28:34.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.206 Controller IO queue size 128, less than required. 00:28:34.206 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:34.206 Initialization complete. Launching workers. 00:28:34.206 ======================================================== 00:28:34.206 Latency(us) 00:28:34.206 Device Information : IOPS MiB/s Average min max 00:28:34.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1203.75 150.47 106988.81 16118.10 231040.88 00:28:34.206 ======================================================== 00:28:34.206 Total : 1203.75 150.47 106988.81 16118.10 231040.88 00:28:34.206 00:28:34.206 12:23:39 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:34.206 12:23:40 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e27a6d0-0b40-438d-a43d-beb1980b113b 00:28:34.206 12:23:40 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aff55f99-0363-40cf-beb7-b3feba8f4c9a 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:34.206 rmmod nvme_tcp 00:28:34.206 rmmod nvme_fabrics 00:28:34.206 rmmod nvme_keyring 00:28:34.206 12:23:41 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1085842 ']' 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1085842 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1085842 ']' 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1085842 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085842 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085842' 00:28:34.206 killing process with pid 1085842 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1085842 00:28:34.206 12:23:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1085842 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.578 12:23:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.105 12:23:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:38.105 00:28:38.105 real 1m31.003s 00:28:38.105 user 5m30.779s 00:28:38.105 sys 0m16.927s 00:28:38.105 12:23:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:38.105 12:23:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:38.105 ************************************ 00:28:38.105 END TEST nvmf_perf 00:28:38.105 ************************************ 00:28:38.105 12:23:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:38.105 12:23:45 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:38.105 12:23:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:38.105 12:23:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.105 12:23:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:38.105 ************************************ 00:28:38.105 START TEST nvmf_fio_host 00:28:38.105 ************************************ 00:28:38.105 12:23:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:38.105 * Looking for test 
storage... 00:28:38.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.105 12:23:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:38.106 12:23:45 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:39.478 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.479 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.479 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:39.479 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.479 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.479 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
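nvmf_fio_host has now found the two E810 ports (cvl_0_0 and cvl_0_1), which on this rig are presumably cabled to each other: moving one port into a network namespace lets a single machine act as both target (10.0.0.2) and initiator (10.0.0.1) over a real NIC. The nvmf_tcp_init trace that follows reduces to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port gets its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                               # verify the path before testing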
00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.736 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:28:39.737 00:28:39.737 --- 10.0.0.2 ping statistics --- 00:28:39.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.737 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:28:39.737 00:28:39.737 --- 10.0.0.1 ping statistics --- 00:28:39.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.737 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1097925 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1097925 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1097925 ']' 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.737 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.737 [2024-07-22 12:23:47.627734] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:28:39.737 [2024-07-22 12:23:47.627816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.737 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.994 [2024-07-22 12:23:47.667968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
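With the data path verified, fio.sh launches the target inside the namespace (its startup banner continues below) and assembles a RAM-backed subsystem over RPC before handing off to fio. Condensed, paths shortened as before:

  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4 cores, all tracepoint groups
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

fio then drives the subsystem through SPDK's external ioengine, with the connection parameters encoded in the --filename string rather than a device node:

  LD_PRELOAD=build/fio/spdk_nvme fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096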
00:28:39.994 [2024-07-22 12:23:47.695125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.994 [2024-07-22 12:23:47.780661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.994 [2024-07-22 12:23:47.780729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.994 [2024-07-22 12:23:47.780754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.994 [2024-07-22 12:23:47.780765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.994 [2024-07-22 12:23:47.780775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.994 [2024-07-22 12:23:47.780831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.994 [2024-07-22 12:23:47.780889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.994 [2024-07-22 12:23:47.780954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.994 [2024-07-22 12:23:47.780956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.994 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.994 12:23:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:28:39.994 12:23:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:40.250 [2024-07-22 12:23:48.115921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.250 12:23:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:40.250 12:23:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:40.250 12:23:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.250 12:23:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:40.506 Malloc1 00:28:40.506 12:23:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.766 12:23:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:41.097 12:23:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.355 [2024-07-22 12:23:49.141560] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.355 12:23:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:41.612 12:23:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.869 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:41.869 fio-3.35 00:28:41.869 Starting 1 thread 00:28:41.869 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.392 00:28:44.392 test: (groupid=0, jobs=1): err= 0: pid=1098242: Mon Jul 22 12:23:51 2024 00:28:44.392 read: IOPS=8067, BW=31.5MiB/s (33.0MB/s)(63.2MiB/2007msec) 00:28:44.392 slat (usec): min=2, max=106, avg= 2.57, stdev= 1.46 00:28:44.392 clat (usec): min=2817, max=15226, avg=8752.09, stdev=691.77 00:28:44.392 lat (usec): min=2839, max=15229, avg=8754.66, stdev=691.71 00:28:44.392 clat percentiles (usec): 00:28:44.392 | 1.00th=[ 7242], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8225], 00:28:44.392 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:28:44.392 | 70.00th=[ 
9110], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9765], 00:28:44.392 | 99.00th=[10421], 99.50th=[10552], 99.90th=[11731], 99.95th=[13960], 00:28:44.392 | 99.99th=[15139] 00:28:44.392 bw ( KiB/s): min=31144, max=32712, per=99.97%, avg=32258.00, stdev=745.83, samples=4 00:28:44.392 iops : min= 7786, max= 8178, avg=8064.50, stdev=186.46, samples=4 00:28:44.392 write: IOPS=8050, BW=31.4MiB/s (33.0MB/s)(63.1MiB/2007msec); 0 zone resets 00:28:44.392 slat (usec): min=2, max=110, avg= 2.73, stdev= 1.24 00:28:44.392 clat (usec): min=1248, max=14506, avg=7070.33, stdev=621.48 00:28:44.392 lat (usec): min=1254, max=14509, avg=7073.06, stdev=621.47 00:28:44.392 clat percentiles (usec): 00:28:44.392 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6652], 00:28:44.392 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:28:44.392 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8029], 00:28:44.392 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11863], 99.95th=[12780], 00:28:44.392 | 99.99th=[14484] 00:28:44.392 bw ( KiB/s): min=31936, max=32512, per=99.94%, avg=32182.00, stdev=258.11, samples=4 00:28:44.392 iops : min= 7984, max= 8128, avg=8045.50, stdev=64.53, samples=4 00:28:44.392 lat (msec) : 2=0.01%, 4=0.11%, 10=98.22%, 20=1.66% 00:28:44.392 cpu : usr=56.88%, sys=38.73%, ctx=108, majf=0, minf=40 00:28:44.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:44.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:44.392 issued rwts: total=16191,16157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:44.392 00:28:44.392 Run status group 0 (all jobs): 00:28:44.392 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.2MiB (66.3MB), run=2007-2007msec 00:28:44.392 WRITE: bw=31.4MiB/s (33.0MB/s), 31.4MiB/s-31.4MiB/s (33.0MB/s-33.0MB/s), io=63.1MiB (66.2MB), run=2007-2007msec 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:44.392 12:23:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:44.392 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:44.392 fio-3.35 00:28:44.392 Starting 1 thread 00:28:44.392 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.917 00:28:46.917 test: (groupid=0, jobs=1): err= 0: pid=1098621: Mon Jul 22 12:23:54 2024 00:28:46.917 read: IOPS=7141, BW=112MiB/s (117MB/s)(224MiB/2008msec) 00:28:46.917 slat (nsec): min=2881, max=94081, avg=3825.36, stdev=1792.28 00:28:46.917 clat (usec): min=3755, max=53172, avg=10335.19, stdev=4448.00 00:28:46.917 lat (usec): min=3759, max=53176, avg=10339.02, stdev=4447.99 00:28:46.917 clat percentiles (usec): 00:28:46.917 | 1.00th=[ 4817], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7767], 00:28:46.917 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10552], 00:28:46.917 | 70.00th=[11207], 80.00th=[11994], 90.00th=[13304], 95.00th=[14746], 00:28:46.917 | 99.00th=[20579], 99.50th=[47973], 99.90th=[52167], 99.95th=[52691], 00:28:46.917 | 99.99th=[53216] 00:28:46.917 bw ( KiB/s): min=44192, max=68928, per=50.79%, avg=58032.00, stdev=12296.07, samples=4 00:28:46.917 iops : min= 2762, max= 4308, avg=3627.00, stdev=768.50, samples=4 00:28:46.917 write: IOPS=4149, BW=64.8MiB/s (68.0MB/s)(118MiB/1822msec); 0 zone resets 00:28:46.918 slat (usec): min=30, max=185, avg=34.09, stdev= 5.76 00:28:46.918 clat (usec): min=6128, max=27354, avg=13431.65, stdev=3718.67 00:28:46.918 lat (usec): min=6160, max=27402, avg=13465.73, stdev=3718.81 00:28:46.918 clat percentiles (usec): 00:28:46.918 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:28:46.918 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12780], 60.00th=[14091], 00:28:46.918 | 70.00th=[15926], 80.00th=[17171], 90.00th=[18482], 95.00th=[19530], 00:28:46.918 | 99.00th=[22152], 99.50th=[22676], 99.90th=[26608], 99.95th=[27132], 00:28:46.918 | 99.99th=[27395] 00:28:46.918 bw ( KiB/s): min=46976, max=71680, per=90.49%, avg=60080.00, 
stdev=12323.44, samples=4 00:28:46.918 iops : min= 2936, max= 4480, avg=3755.00, stdev=770.21, samples=4 00:28:46.918 lat (msec) : 4=0.11%, 10=40.76%, 20=57.12%, 50=1.81%, 100=0.19% 00:28:46.918 cpu : usr=69.81%, sys=26.86%, ctx=60, majf=0, minf=58 00:28:46.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:46.918 issued rwts: total=14340,7561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:46.918 00:28:46.918 Run status group 0 (all jobs): 00:28:46.918 READ: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=224MiB (235MB), run=2008-2008msec 00:28:46.918 WRITE: bw=64.8MiB/s (68.0MB/s), 64.8MiB/s-64.8MiB/s (68.0MB/s-68.0MB/s), io=118MiB (124MB), run=1822-1822msec 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:28:46.918 12:23:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:28:50.227 Nvme0n1 00:28:50.227 12:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:53.503 12:24:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=314e4632-95ff-4f45-a55d-5bf131888889 00:28:53.503 12:24:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 314e4632-95ff-4f45-a55d-5bf131888889 00:28:53.503 12:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=314e4632-95ff-4f45-a55d-5bf131888889 00:28:53.503 12:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:53.503 12:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:53.503 12:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:53.503 12:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 
00:28:53.504 { 00:28:53.504 "uuid": "314e4632-95ff-4f45-a55d-5bf131888889", 00:28:53.504 "name": "lvs_0", 00:28:53.504 "base_bdev": "Nvme0n1", 00:28:53.504 "total_data_clusters": 930, 00:28:53.504 "free_clusters": 930, 00:28:53.504 "block_size": 512, 00:28:53.504 "cluster_size": 1073741824 00:28:53.504 } 00:28:53.504 ]' 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="314e4632-95ff-4f45-a55d-5bf131888889") .free_clusters' 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="314e4632-95ff-4f45-a55d-5bf131888889") .cluster_size' 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:28:53.504 952320 00:28:53.504 12:24:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:28:53.760 51b396a9-aff6-4437-afe3-3acfc5f00509 00:28:53.760 12:24:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:54.017 12:24:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:54.274 12:24:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:54.531 12:24:02 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:54.531 12:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:54.788 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:54.788 fio-3.35 00:28:54.788 Starting 1 thread 00:28:54.788 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.308 00:28:57.308 test: (groupid=0, jobs=1): err= 0: pid=1100016: Mon Jul 22 12:24:04 2024 00:28:57.308 read: IOPS=5805, BW=22.7MiB/s (23.8MB/s)(45.5MiB/2007msec) 00:28:57.308 slat (usec): min=2, max=142, avg= 3.00, stdev= 2.57 00:28:57.308 clat (usec): min=1044, max=171627, avg=12152.38, stdev=11813.85 00:28:57.308 lat (usec): min=1048, max=171661, avg=12155.38, stdev=11814.16 00:28:57.308 clat percentiles (msec): 00:28:57.308 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:28:57.308 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:28:57.308 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 13], 00:28:57.308 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:28:57.308 | 99.99th=[ 171] 00:28:57.308 bw ( KiB/s): min=16256, max=26216, per=99.71%, avg=23154.00, stdev=4641.23, samples=4 00:28:57.308 iops : min= 4064, max= 6554, avg=5788.50, stdev=1160.31, samples=4 00:28:57.308 write: IOPS=5788, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2007msec); 0 zone resets 00:28:57.308 slat (usec): min=2, max=129, avg= 3.15, stdev= 2.20 00:28:57.308 clat (usec): min=420, max=169382, avg=9809.84, stdev=11085.65 00:28:57.308 lat (usec): min=424, max=169388, avg=9812.99, stdev=11085.95 00:28:57.308 clat percentiles (msec): 00:28:57.308 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:28:57.308 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:28:57.308 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:28:57.308 | 99.00th=[ 12], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:28:57.308 | 99.99th=[ 169] 00:28:57.309 bw ( KiB/s): min=17320, max=25856, per=99.89%, avg=23130.00, stdev=3918.24, samples=4 00:28:57.309 iops : min= 4330, max= 6464, avg=5782.50, stdev=979.56, samples=4 00:28:57.309 lat (usec) : 500=0.01%, 750=0.01% 00:28:57.309 lat (msec) : 2=0.03%, 4=0.11%, 10=48.15%, 20=51.15%, 250=0.55% 00:28:57.309 cpu : usr=53.59%, sys=42.77%, 
ctx=99, majf=0, minf=40 00:28:57.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:57.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:57.309 issued rwts: total=11651,11618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:57.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:57.309 00:28:57.309 Run status group 0 (all jobs): 00:28:57.309 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.5MiB (47.7MB), run=2007-2007msec 00:28:57.309 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2007-2007msec 00:28:57.309 12:24:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:57.309 12:24:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:58.678 12:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3897d83f-3102-45f7-a95d-1bc653f9a6b1 00:28:58.678 12:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3897d83f-3102-45f7-a95d-1bc653f9a6b1 00:28:58.678 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3897d83f-3102-45f7-a95d-1bc653f9a6b1 00:28:58.678 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:58.678 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:58.678 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:58.678 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:58.936 { 00:28:58.936 "uuid": "314e4632-95ff-4f45-a55d-5bf131888889", 00:28:58.936 "name": "lvs_0", 00:28:58.936 "base_bdev": "Nvme0n1", 00:28:58.936 "total_data_clusters": 930, 00:28:58.936 "free_clusters": 0, 00:28:58.936 "block_size": 512, 00:28:58.936 "cluster_size": 1073741824 00:28:58.936 }, 00:28:58.936 { 00:28:58.936 "uuid": "3897d83f-3102-45f7-a95d-1bc653f9a6b1", 00:28:58.936 "name": "lvs_n_0", 00:28:58.936 "base_bdev": "51b396a9-aff6-4437-afe3-3acfc5f00509", 00:28:58.936 "total_data_clusters": 237847, 00:28:58.936 "free_clusters": 237847, 00:28:58.936 "block_size": 512, 00:28:58.936 "cluster_size": 4194304 00:28:58.936 } 00:28:58.936 ]' 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3897d83f-3102-45f7-a95d-1bc653f9a6b1") .free_clusters' 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3897d83f-3102-45f7-a95d-1bc653f9a6b1") .cluster_size' 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:28:58.936 951388 00:28:58.936 12:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:28:59.502 ce533f69-d2c8-4c86-a778-538c0290d96e 00:28:59.502 12:24:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:59.759 12:24:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:00.023 12:24:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
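Every fio_plugin invocation in this test follows the same pattern traced above: export a bdev over NVMe-oF/TCP via rpc.py, probe the SPDK fio ioengine with ldd for a linked sanitizer runtime, preload whatever it finds together with the plugin, and hand fio a --filename that encodes the transport ID instead of a device path. A minimal sketch, using only commands that appear in the trace; $rootdir is shorthand for the workspace path, and the single grep collapses the trace's two-pass sanitizer loop:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"
plugin="$rootdir/build/fio/spdk_nvme"

# Export the nested lvol as namespace 1 of a fresh subsystem (as traced above).
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420

# If the plugin links a sanitizer runtime it must be preloaded ahead of it;
# the trace checks libasan and libclang_rt.asan in two separate passes.
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')

# The --filename string is an NVMe-oF transport ID, not a block-device path.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    "$rootdir/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096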
00:29:00.338 12:24:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.596 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:00.596 fio-3.35 00:29:00.596 Starting 1 thread 00:29:00.596 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.124 00:29:03.124 test: (groupid=0, jobs=1): err= 0: pid=1101167: Mon Jul 22 12:24:10 2024 00:29:03.124 read: IOPS=5833, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2009msec) 00:29:03.124 slat (usec): min=2, max=143, avg= 2.83, stdev= 2.33 00:29:03.124 clat (usec): min=4551, max=19498, avg=12117.86, stdev=1039.19 00:29:03.124 lat (usec): min=4565, max=19501, avg=12120.69, stdev=1039.09 00:29:03.124 clat percentiles (usec): 00:29:03.124 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:29:03.124 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:29:03.124 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:29:03.124 | 99.00th=[14484], 99.50th=[14877], 99.90th=[17957], 99.95th=[18220], 00:29:03.124 | 99.99th=[19530] 00:29:03.124 bw ( KiB/s): min=22184, max=23864, per=99.87%, avg=23304.00, stdev=759.62, samples=4 00:29:03.124 iops : min= 5546, max= 5966, avg=5826.00, stdev=189.91, samples=4 00:29:03.124 write: IOPS=5820, BW=22.7MiB/s (23.8MB/s)(45.7MiB/2009msec); 0 zone resets 00:29:03.124 slat (usec): min=2, max=106, avg= 3.05, stdev= 2.02 00:29:03.124 clat (usec): min=2220, max=17747, avg=9698.57, stdev=905.03 00:29:03.124 lat (usec): min=2226, max=17750, avg=9701.62, stdev=904.94 00:29:03.124 clat percentiles (usec): 00:29:03.124 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 8979], 00:29:03.124 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:29:03.124 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:29:03.124 | 99.00th=[11731], 99.50th=[12125], 99.90th=[15533], 99.95th=[16581], 00:29:03.124 | 99.99th=[16909] 00:29:03.124 bw ( KiB/s): min=23096, max=23432, per=99.94%, avg=23270.00, stdev=182.94, samples=4 00:29:03.124 iops : min= 5774, max= 5858, avg=5817.50, stdev=45.73, samples=4 00:29:03.124 lat (msec) : 4=0.05%, 10=33.12%, 20=66.84% 00:29:03.124 cpu : usr=55.28%, sys=41.19%, ctx=92, majf=0, minf=40 00:29:03.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:03.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:03.124 issued rwts: total=11720,11694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:03.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:03.124 00:29:03.124 Run status group 0 (all jobs): 00:29:03.124 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2009-2009msec 00:29:03.124 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.7MiB (47.9MB), run=2009-2009msec 00:29:03.124 12:24:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:03.381 12:24:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:03.381 12:24:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:07.554 12:24:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:07.554 12:24:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:10.828 12:24:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:10.828 12:24:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:12.728 rmmod nvme_tcp 00:29:12.728 rmmod nvme_fabrics 00:29:12.728 rmmod nvme_keyring 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1097925 ']' 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1097925 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1097925 ']' 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1097925 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1097925 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1097925' 00:29:12.728 killing process with pid 1097925 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1097925 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1097925 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.728 12:24:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.631 12:24:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:14.631 00:29:14.631 real 0m37.071s 00:29:14.631 user 2m20.718s 00:29:14.631 sys 0m7.829s 00:29:14.631 12:24:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.631 12:24:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.631 ************************************ 00:29:14.631 END TEST nvmf_fio_host 00:29:14.631 ************************************ 00:29:14.889 12:24:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:14.889 12:24:22 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:14.889 12:24:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:14.889 12:24:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.889 12:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:14.889 ************************************ 00:29:14.889 START TEST nvmf_failover 00:29:14.889 ************************************ 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:14.889 * Looking for test storage... 
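For reference, the two get_lvs_free_mb values echoed in the nvmf_fio_host run above are straight cluster arithmetic, free_mb = free_clusters * cluster_size / 1048576: lvs_0 reports 930 free clusters of 1073741824 bytes (1 GiB), the nested lvs_n_0 reports 237847 free clusters of 4194304 bytes (4 MiB). A hypothetical re-derivation with the numbers from the trace:

echo $(( 930    * 1073741824 / 1048576 ))   # 952320, the size passed to bdev_lvol_create lbd_0
echo $(( 237847 * 4194304    / 1048576 ))   # 951388, the size passed to bdev_lvol_create lbd_nest_0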
00:29:14.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.889 12:24:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.792 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:16.793 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:16.793 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:16.793 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:16.793 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:16.793 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.050 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.050 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.050 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:17.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:17.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:29:17.050 00:29:17.051 --- 10.0.0.2 ping statistics --- 00:29:17.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.051 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:29:17.051 00:29:17.051 --- 10.0.0.1 ping statistics --- 00:29:17.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.051 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1104611 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1104611 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1104611 ']' 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.051 12:24:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.051 [2024-07-22 12:24:24.851103] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
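The nvmf_tcp_init sequence just traced is easier to follow restated in one place: the target-side port is moved into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses a real link between the two ports. A condensed sketch, assuming root privileges and the cvl_0_* interface names the trace discovered under 0000:0a:00.0/1:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                                # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions

Every nvmf_tgt below is then launched under ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.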
00:29:17.051 [2024-07-22 12:24:24.851185] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.051 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.051 [2024-07-22 12:24:24.890753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:17.051 [2024-07-22 12:24:24.917384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:17.307 [2024-07-22 12:24:25.008431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.307 [2024-07-22 12:24:25.008504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.307 [2024-07-22 12:24:25.008518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.307 [2024-07-22 12:24:25.008529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.307 [2024-07-22 12:24:25.008539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.307 [2024-07-22 12:24:25.008709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.307 [2024-07-22 12:24:25.008772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.307 [2024-07-22 12:24:25.008776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.307 12:24:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:17.307 12:24:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:17.307 12:24:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:17.307 12:24:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.307 12:24:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.307 12:24:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.307 12:24:25 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:17.564 [2024-07-22 12:24:25.422522] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.564 12:24:25 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:17.821 Malloc0 00:29:17.821 12:24:25 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.078 12:24:25 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:18.336 12:24:26 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.593 [2024-07-22 12:24:26.500298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.593 12:24:26 nvmf_tcp.nvmf_failover -- 
host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:19.158 [2024-07-22 12:24:26.789120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:19.158 12:24:26 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:19.158 [2024-07-22 12:24:27.045930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1104902 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1104902 /var/tmp/bdevperf.sock 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1104902 ']' 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:19.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
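What failover.sh has assembled at this point: one subsystem (cnode1, backed by the 64 MiB Malloc0 bdev) listening on three TCP ports, plus a bdevperf instance driven over its own RPC socket. A skeleton of the scenario, using only commands that appear in the trace; $rootdir abbreviates the workspace path, and the real script waits for the socket before issuing RPCs:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"
sock=/var/tmp/bdevperf.sock

for port in 4420 4421 4422; do                    # three paths to the same subsystem
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done

"$rootdir/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 15 -f &

"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # primary path
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # secondary path

"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                    # drop the primary mid-I/O to force failover

The tqpair state-change messages that follow in the log accompany this deliberate removal of the 4420 listener while I/O is in flight.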
00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.158 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:19.454 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:19.454 12:24:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:19.454 12:24:27 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.018 NVMe0n1 00:29:20.018 12:24:27 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.276 00:29:20.276 12:24:28 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1105041 00:29:20.276 12:24:28 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:20.276 12:24:28 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:21.208 12:24:29 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.465 [2024-07-22 12:24:29.355145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd905c0 is same with the state(5) to be set [... identical message repeated 30 more times: 12:24:29.355218 - 12:24:29.355634 ...] 00:29:21.466 12:24:29 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:24.742 12:24:32 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:25.000 00:29:25.000 12:24:32 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:25.256 [2024-07-22 12:24:32.989406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd91b60 is same with the state(5) to be set [... identical message repeated 6 more times: 12:24:32.989481 - 12:24:32.989546 ...] 00:29:25.256 12:24:33 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:28.532 12:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.532 [2024-07-22 12:24:36.252559] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.532 12:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:29.534 12:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:29.792 [2024-07-22 12:24:37.553206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd92240 is same with the state(5) to be set [... identical message repeated 30 more times: 12:24:37.553277 - 12:24:37.553684 ...] 00:29:29.792 12:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait
1105041 00:29:36.348 0 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1104902 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1104902 ']' 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1104902 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1104902 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1104902' 00:29:36.348 killing process with pid 1104902 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1104902 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1104902 00:29:36.348 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:36.348 [2024-07-22 12:24:27.109377] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:29:36.348 [2024-07-22 12:24:27.109475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1104902 ] 00:29:36.348 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.348 [2024-07-22 12:24:27.141343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:36.348 [2024-07-22 12:24:27.169321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.348 [2024-07-22 12:24:27.254640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.348 Running I/O for 15 seconds... 
00:29:36.348 [2024-07-22 12:24:29.356557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356932] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.356982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.356996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357215] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78928 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.348 [2024-07-22 12:24:29.357556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.348 [2024-07-22 12:24:29.357584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 
[2024-07-22 12:24:29.357854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.357974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.357988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.348 [2024-07-22 12:24:29.358002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.348 [2024-07-22 12:24:29.358015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.358775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.358980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.358993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 
12:24:29.359070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.349 [2024-07-22 12:24:29.359478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78552 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.359982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.359998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.360011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.360026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.360039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.360053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.360066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.360081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.360094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.360108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.360122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.360136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.349 [2024-07-22 12:24:29.360149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.349 [2024-07-22 12:24:29.360163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:36.350 [2024-07-22 12:24:29.360258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.350 [2024-07-22 12:24:29.360400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.350 [2024-07-22 12:24:29.360444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.350 [2024-07-22 12:24:29.360456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78680 len:8 PRP1 0x0 PRP2 0x0 00:29:36.350 [2024-07-22 12:24:29.360468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.350 [2024-07-22 12:24:29.360528] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22ed000 was disconnected and freed. reset controller. 
00:29:36.350 [2024-07-22 12:24:29.360546] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:36.350 [2024-07-22 12:24:29.360580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.350 [2024-07-22 12:24:29.360619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:29.360636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.350 [2024-07-22 12:24:29.360649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:29.360663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.350 [2024-07-22 12:24:29.360676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:29.360689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.350 [2024-07-22 12:24:29.360702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:29.360715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.350 [2024-07-22 12:24:29.360775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c4850 (9): Bad file descriptor
00:29:36.350 [2024-07-22 12:24:29.364048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.350 [2024-07-22 12:24:29.483158] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:36.350 [2024-07-22 12:24:32.990059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.990981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.990995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.350 [2024-07-22 12:24:32.991405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.350 [2024-07-22 12:24:32.991432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.350 [2024-07-22 12:24:32.991460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.350 [2024-07-22 12:24:32.991474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.991654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.991684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.991712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.991978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.991991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.351 [2024-07-22 12:24:32.992661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.992981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.992994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.351 [2024-07-22 12:24:32.993596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.351 [2024-07-22 12:24:32.993646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.993665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95504 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.993684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.993703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.993716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.993727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95512 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.993740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.993753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.993764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.993775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95520 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.993787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.993801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.993811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.993822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95528 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.993835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.993847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.993858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.993869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94824 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.993882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.993895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.993906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.993917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94832 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.993945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.993958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.993969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.993980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94840 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.993993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.994019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.994031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94848 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.994043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.994066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.994078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94856 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.994091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.994114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.994125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94864 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.994138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.994161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.994172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94872 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.994184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:36.352 [2024-07-22 12:24:32.994207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:36.352 [2024-07-22 12:24:32.994218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94880 len:8 PRP1 0x0 PRP2 0x0
00:29:36.352 [2024-07-22 12:24:32.994230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994287] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22e8550 was disconnected and freed. reset controller.
00:29:36.352 [2024-07-22 12:24:32.994306] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:36.352 [2024-07-22 12:24:32.994354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.352 [2024-07-22 12:24:32.994373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.352 [2024-07-22 12:24:32.994402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.352 [2024-07-22 12:24:32.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.352 [2024-07-22 12:24:32.994454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:32.994472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.352 [2024-07-22 12:24:32.994527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c4850 (9): Bad file descriptor
00:29:36.352 [2024-07-22 12:24:32.997795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.352 [2024-07-22 12:24:33.073366] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:36.352 [2024-07-22 12:24:37.554979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.352 [2024-07-22 12:24:37.555430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.352 [2024-07-22 12:24:37.555459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.352 [2024-07-22 12:24:37.555487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.352 [2024-07-22 12:24:37.555514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.555975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.555990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.556003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.556018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.556031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.556045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.556058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.556072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.556085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.556099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.352 [2024-07-22 12:24:37.556115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.556130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.352 [2024-07-22 12:24:37.556143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.556158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.352 [2024-07-22 12:24:37.556171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.352 [2024-07-22 12:24:37.556185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.352 [2024-07-22 12:24:37.556198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.352 [2024-07-22 12:24:37.556212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.352 [2024-07-22 12:24:37.556225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.352 [2024-07-22 12:24:37.556239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.352 [2024-07-22 12:24:37.556252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.352 [2024-07-22 12:24:37.556266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.352 [2024-07-22 12:24:37.556279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.352 [2024-07-22 12:24:37.556293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.352 [2024-07-22 12:24:37.556306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.352 [2024-07-22 12:24:37.556320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.352 [2024-07-22 12:24:37.556333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.352 [2024-07-22 12:24:37.556347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.352 [2024-07-22 12:24:37.556359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:80 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38856 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.556985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.556998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 
12:24:37.557052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.353 [2024-07-22 12:24:37.557467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39064 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39072 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39080 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557678] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39088 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39096 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39104 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39112 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39120 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.557940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.557951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39128 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.557963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.557980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:29:36.353 [2024-07-22 12:24:37.557991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.558002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39136 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.558014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.558027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.558037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.558048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39144 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.558060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.558072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.558083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.558094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39152 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.558106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.558119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.558129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.558139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39160 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.558152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.558165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.558175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.558186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39168 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.558198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.558210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.558227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.353 [2024-07-22 12:24:37.558238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39176 len:8 PRP1 0x0 PRP2 0x0 00:29:36.353 [2024-07-22 12:24:37.558250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.353 [2024-07-22 12:24:37.558262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.353 [2024-07-22 12:24:37.558272] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39184 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39192 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39200 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39208 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39216 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39224 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39232 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39240 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39248 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39256 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39264 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39272 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 
[2024-07-22 12:24:37.558875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39280 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39288 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.558961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.558972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.558982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39296 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.558994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39304 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39312 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39320 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39328 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39336 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39344 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39352 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39360 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39368 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:39376 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.559494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.559505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39384 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.559517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.559529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.574315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.574347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39392 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.574363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.574378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:36.354 [2024-07-22 12:24:37.574388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:36.354 [2024-07-22 12:24:37.574399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39400 len:8 PRP1 0x0 PRP2 0x0 00:29:36.354 [2024-07-22 12:24:37.574411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.354 [2024-07-22 12:24:37.574479] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22e8350 was disconnected and freed. reset controller. 
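A note for anyone triaging this section: the abort flood above is expected, not a failure. Deleting the submission queue completes every in-flight and queued command with ABORTED - SQ DELETION, and bdev_nvme requeues them after the reconnect. A one-liner like the following can condense such a flood into per-opcode counts (a sketch; $LOG is a placeholder for wherever this console output was captured):

  # Count how many READ/WRITE commands were printed during the abort flood.
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$LOG" |
    awk '{n[$NF]++} END {for (op in n) print op, n[op]}'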
00:29:36.354 [2024-07-22 12:24:37.574499] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:36.354 [2024-07-22 12:24:37.574553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.354 [2024-07-22 12:24:37.574573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.354 [2024-07-22 12:24:37.574589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.354 [2024-07-22 12:24:37.574602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.354 [2024-07-22 12:24:37.574708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.354 [2024-07-22 12:24:37.574729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.354 [2024-07-22 12:24:37.574743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:36.354 [2024-07-22 12:24:37.574756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.354 [2024-07-22 12:24:37.574769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.354 [2024-07-22 12:24:37.574833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c4850 (9): Bad file descriptor
00:29:36.354 [2024-07-22 12:24:37.578132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.354 [2024-07-22 12:24:37.744210] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
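The sequence above is the failover path working as intended: the TCP qpair to 10.0.0.2:4422 dies ("Bad file descriptor"), the outstanding commands are aborted, and bdev_nvme fails over to the next transport ID it knows about before reconnecting. Those alternate transport IDs exist because the test attaches the same controller name over several portals, roughly as in this sketch (assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; $rpc stands in for the full scripts/rpc.py path used in the trace further down):

  # Expose two extra portals on the target side:
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # On the bdevperf host side, attach NVMe0 over each portal; only the first
  # attach creates the bdev (it prints "NVMe0n1"), the later ones add
  # failover trids for the same controller:
  for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done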
00:29:36.354
00:29:36.354 Latency(us)
00:29:36.354 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:29:36.354 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:36.354 Verification LBA range: start 0x0 length 0x4000
00:29:36.354 NVMe0n1                     :      15.01  8442.41    32.98   932.27     0.00  13626.51   782.79  29709.65
00:29:36.354 ===================================================================================================================
00:29:36.354 Total                       :             8442.41    32.98   932.27     0.00  13626.51   782.79  29709.65
00:29:36.354 Received shutdown signal, test time was about 15.000000 seconds
00:29:36.354
00:29:36.354 Latency(us)
00:29:36.354 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:29:36.354 ===================================================================================================================
00:29:36.354 Total                       :                0.00     0.00     0.00     0.00      0.00     0.00     0.00
12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1106756
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1106756 /var/tmp/bdevperf.sock
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1106756 ']'
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
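The @65/@67 lines above are the pass/fail gate for the first phase: the captured bdevperf output must contain exactly three "Resetting controller successful" messages, one per forced failover. The test then starts a second bdevperf that waits for an RPC trigger instead of running immediately, so the paths can be attached first. A condensed sketch of both steps (the $LOG path and the relative binary path are illustrative; the flags are the ones traced above):

  count=$(grep -c 'Resetting controller successful' "$LOG")
  (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }
  # -z: wait for a perform_tests RPC before running I/O; -r: private RPC socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!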
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:29:36.354 12:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-22 12:24:44.089584] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
12:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:36.611 [2024-07-22 12:24:44.334266] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:29:36.611 12:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:37.174 NVMe0n1
00:29:37.174 12:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:37.431
00:29:37.431 12:24:45 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:37.688
00:29:37.688 12:24:45 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:24:45 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:29:37.944 12:24:45 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:38.201 12:24:45 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:29:41.501 12:24:48 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:24:48 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:29:41.501 12:24:49 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1107425
12:24:49 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
12:24:49 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1107425
00:29:42.432 0
00:29:42.691 12:24:50 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-22 12:24:43.598850] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:29:42.691 [2024-07-22 12:24:43.598938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106756 ]
EAL: No free 2048 kB hugepages reported on node 1
00:29:42.691 [2024-07-22 12:24:43.630525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:42.691 [2024-07-22 12:24:43.658388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:42.691 [2024-07-22 12:24:43.741296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:42.691 [2024-07-22 12:24:45.907666] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:42.691 [2024-07-22 12:24:45.907746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:42.691 [2024-07-22 12:24:45.907769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.691 [2024-07-22 12:24:45.907787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:42.691 [2024-07-22 12:24:45.907800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.691 [2024-07-22 12:24:45.907814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:42.691 [2024-07-22 12:24:45.907828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.691 [2024-07-22 12:24:45.907842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:42.691 [2024-07-22 12:24:45.907855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.691 [2024-07-22 12:24:45.907870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.691 [2024-07-22 12:24:45.907920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.691 [2024-07-22 12:24:45.907954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe79850 (9): Bad file descriptor
00:29:42.691 [2024-07-22 12:24:46.049839] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:42.691 Running I/O for 1 seconds...
00:29:42.691 
00:29:42.691 Latency(us) 
00:29:42.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:42.691 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:29:42.691 Verification LBA range: start 0x0 length 0x4000 
00:29:42.691 NVMe0n1 : 1.01 8546.60 33.39 0.00 0.00 14915.40 2888.44 13010.11 
00:29:42.691 =================================================================================================================== 
00:29:42.691 Total : 8546.60 33.39 0.00 0.00 14915.40 2888.44 13010.11 
00:29:42.691 12:24:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 12:24:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 12:24:50 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:42.949 12:24:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:42.949 12:24:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:43.206 12:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:43.464 12:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:46.756 12:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 12:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:46.756 12:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1106756 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1106756 ']' 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1106756 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1106756 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1106756' killing process with pid 1106756 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1106756 12:24:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1106756 00:29:47.014 12:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 12:24:54 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:47.271
12:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:47.271 rmmod nvme_tcp 00:29:47.271 rmmod nvme_fabrics 00:29:47.271 rmmod nvme_keyring 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1104611 ']' 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1104611 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1104611 ']' 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1104611 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1104611 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1104611' 00:29:47.271 killing process with pid 1104611 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1104611 00:29:47.271 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1104611 00:29:47.528 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.528 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.528 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.528 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.528 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.528 12:24:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.528 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.529 12:24:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.055 12:24:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:50.055 00:29:50.055 real 0m34.863s 00:29:50.055 user 2m2.532s 00:29:50.055 sys 0m6.062s 00:29:50.055 12:24:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.055 12:24:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
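The nvmftestfini sequence above tears the failover setup down in dependency order: host-side NVMe kernel modules first, then the target process, then the addresses on the test interface. A condensed sketch of what the trace executes (the PID is of course specific to this run):

    modprobe -v -r nvme-tcp      # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 1104611                 # killprocess: the nvmf target started earlier in the run
    ip -4 addr flush cvl_0_1     # drop the 10.0.0.x test addresses from the initiator port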
00:29:50.055 ************************************ 00:29:50.055 END TEST nvmf_failover 00:29:50.055 ************************************ 00:29:50.055 12:24:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:50.055 12:24:57 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:50.055 12:24:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:50.055 12:24:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.055 12:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.055 ************************************ 00:29:50.055 START TEST nvmf_host_discovery 00:29:50.055 ************************************ 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:50.055 * Looking for test storage... 00:29:50.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.055 12:24:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:50.056 12:24:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:29:50.056 12:24:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.948 12:24:59 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:51.948 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:51.948 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:51.948 12:24:59 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:51.948 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:51.948 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.948 12:24:59 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:51.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:51.948 00:29:51.948 --- 10.0.0.2 ping statistics --- 00:29:51.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.948 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:29:51.948 00:29:51.948 --- 10.0.0.1 ping statistics --- 00:29:51.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.948 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.948 12:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1110017 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1110017 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1110017 ']' 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.949 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.949 [2024-07-22 12:24:59.710309] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:29:51.949 [2024-07-22 12:24:59.710400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.949 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.949 [2024-07-22 12:24:59.748505] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:51.949 [2024-07-22 12:24:59.781105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.949 [2024-07-22 12:24:59.873201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.949 [2024-07-22 12:24:59.873264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.949 [2024-07-22 12:24:59.873299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.949 [2024-07-22 12:24:59.873314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.949 [2024-07-22 12:24:59.873325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
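Unlike the failover run, which reused a live setup, nvmf_host_discovery builds its fabric from scratch: nvmf_tcp_init moves one port of the two-port E810 NIC into a private network namespace, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over a real link, which is why the nvmf_tgt invocation above is wrapped in ip netns exec. A condensed sketch of the @244-@264 and @480 steps in the trace, workspace paths shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays put
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Start the target where the data port now lives:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2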
00:29:51.949 [2024-07-22 12:24:59.873375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.206 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.206 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:52.206 12:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.206 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.206 12:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.206 [2024-07-22 12:25:00.016305] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.206 [2024-07-22 12:25:00.024477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.206 null0 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.206 null1 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1110153 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1110153 /tmp/host.sock 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1110153 ']' 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:52.206 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.206 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.207 [2024-07-22 12:25:00.096394] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:29:52.207 [2024-07-22 12:25:00.096484] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110153 ] 00:29:52.207 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.207 [2024-07-22 12:25:00.128978] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:52.463 [2024-07-22 12:25:00.161451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.463 [2024-07-22 12:25:00.253035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:52.463 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.464 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.720 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.721 12:25:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 [2024-07-22 12:25:00.666241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.978 12:25:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:52.978 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:29:52.979 12:25:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:29:53.543 [2024-07-22 12:25:01.435791] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:53.543 [2024-07-22 12:25:01.435829] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:53.543 [2024-07-22 12:25:01.435859] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:53.800 [2024-07-22 12:25:01.522139] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:54.057 [2024-07-22 12:25:01.746516] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:29:54.057 [2024-07-22 12:25:01.746544] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.057 12:25:01 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.057 12:25:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:54.315 12:25:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.315 [2024-07-22 12:25:02.114415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:54.315 [2024-07-22 12:25:02.115384] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:54.315 [2024-07-22 12:25:02.115421] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.315 12:25:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.315 [2024-07-22 12:25:02.202101] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:54.315 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.572 12:25:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:54.572 12:25:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:29:54.829 [2024-07-22 12:25:02.510485] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:54.829 [2024-07-22 12:25:02.510511] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:54.829 [2024-07-22 12:25:02.510521] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:55.392 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.650 [2024-07-22 12:25:03.338218] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:55.650 [2024-07-22 12:25:03.338250] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:55.650 [2024-07-22 12:25:03.347557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.650 [2024-07-22 12:25:03.347611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.650 [2024-07-22 12:25:03.347635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.650 [2024-07-22 12:25:03.347664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.650 [2024-07-22 12:25:03.347678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.650 [2024-07-22 12:25:03.347691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.650 [2024-07-22 12:25:03.347705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.650 [2024-07-22 12:25:03.347717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.650 [2024-07-22 12:25:03.347731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f6d0 is same with the state(5) to be set 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.650 [2024-07-22 12:25:03.357568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f6d0 (9): Bad file descriptor 00:29:55.650 [2024-07-22 12:25:03.367636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:55.650 [2024-07-22 12:25:03.367870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-07-22 12:25:03.367910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f6d0 with addr=10.0.0.2, port=4420 00:29:55.650 [2024-07-22 12:25:03.367928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f6d0 is same with the state(5) to be set 00:29:55.650 [2024-07-22 12:25:03.367951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f6d0 (9): Bad file descriptor 00:29:55.650 [2024-07-22 12:25:03.367998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:55.650 [2024-07-22 12:25:03.368013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:55.650 [2024-07-22 12:25:03.368027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:55.650 [2024-07-22 12:25:03.368072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.650 [2024-07-22 12:25:03.377734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:55.650 [2024-07-22 12:25:03.377896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-07-22 12:25:03.377934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f6d0 with addr=10.0.0.2, port=4420 00:29:55.650 [2024-07-22 12:25:03.377950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f6d0 is same with the state(5) to be set 00:29:55.650 [2024-07-22 12:25:03.377971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f6d0 (9): Bad file descriptor 00:29:55.650 [2024-07-22 12:25:03.378004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:55.650 [2024-07-22 12:25:03.378021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:55.650 [2024-07-22 12:25:03.378035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:55.650 [2024-07-22 12:25:03.378054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
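The connect() failures above are errno 111 (ECONNREFUSED): the test has just torn down the 4420 listener (host/discovery.sh@127 above) while the host still holds a controller on it, so every reset/reconnect cycle is refused until the next discovery log page drops the stale path. Assuming a running target and the host RPC socket used in this test, the same state can be inspected by hand:

# remove the listener the host is still attached to, as the test does above
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# then watch the host-side controller state while bdev_nvme retries the reconnect
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0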
00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:55.650 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:55.650 [2024-07-22 12:25:03.387806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:55.650 [2024-07-22 12:25:03.388020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-07-22 12:25:03.388052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f6d0 with addr=10.0.0.2, port=4420 00:29:55.650 [2024-07-22 12:25:03.388077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f6d0 is same with the state(5) to be set 00:29:55.650 [2024-07-22 12:25:03.388104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f6d0 (9): Bad file descriptor 00:29:55.650 [2024-07-22 12:25:03.388127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:55.650 [2024-07-22 12:25:03.388142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:55.650 [2024-07-22 12:25:03.388157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:55.650 [2024-07-22 12:25:03.388178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
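get_bdev_list, polled throughout this section, is just the bdev inventory flattened to a single sorted line so it can be string-compared against expectations like "nvme0n1 nvme0n2". A sketch matching the pipeline traced above (host/discovery.sh@55):

get_bdev_list() {
    # all bdevs on the host app, names only, as one sorted space-separated line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}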
00:29:55.650 [2024-07-22 12:25:03.397881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:55.650 [2024-07-22 12:25:03.398092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-07-22 12:25:03.398123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f6d0 with addr=10.0.0.2, port=4420 00:29:55.650 [2024-07-22 12:25:03.398141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f6d0 is same with the state(5) to be set 00:29:55.650 [2024-07-22 12:25:03.398181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f6d0 (9): Bad file descriptor 00:29:55.650 [2024-07-22 12:25:03.398221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:55.651 [2024-07-22 12:25:03.398241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:55.651 [2024-07-22 12:25:03.398257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:55.651 [2024-07-22 12:25:03.398278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.651 [2024-07-22 12:25:03.407993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:55.651 [2024-07-22 12:25:03.408175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.651 [2024-07-22 12:25:03.408202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f6d0 with addr=10.0.0.2, port=4420 00:29:55.651 [2024-07-22 12:25:03.408218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f6d0 is same with the state(5) to be set 00:29:55.651 [2024-07-22 12:25:03.408253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f6d0 (9): Bad file descriptor 00:29:55.651 [2024-07-22 12:25:03.408276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:55.651 [2024-07-22 12:25:03.408290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:55.651 [2024-07-22 12:25:03.408303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:55.651 [2024-07-22 12:25:03.408322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.651 [2024-07-22 12:25:03.418065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:55.651 [2024-07-22 12:25:03.418276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.651 [2024-07-22 12:25:03.418304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f6d0 with addr=10.0.0.2, port=4420 00:29:55.651 [2024-07-22 12:25:03.418321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f6d0 is same with the state(5) to be set 00:29:55.651 [2024-07-22 12:25:03.418343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f6d0 (9): Bad file descriptor 00:29:55.651 [2024-07-22 12:25:03.418375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:55.651 [2024-07-22 12:25:03.418398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:55.651 [2024-07-22 12:25:03.418427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:55.651 [2024-07-22 12:25:03.418447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.651 [2024-07-22 12:25:03.424120] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:55.651 [2024-07-22 12:25:03.424153] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:55.651 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.934 12:25:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.880 [2024-07-22 12:25:04.713443] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:56.880 [2024-07-22 12:25:04.713484] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:56.880 [2024-07-22 12:25:04.713509] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:57.138 [2024-07-22 12:25:04.840898] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:57.138 [2024-07-22 12:25:04.907975] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:57.138 [2024-07-22 12:25:04.908026] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:57.138 request: 00:29:57.138 { 00:29:57.138 "name": "nvme", 00:29:57.138 "trtype": "tcp", 00:29:57.138 "traddr": "10.0.0.2", 00:29:57.138 "adrfam": "ipv4", 00:29:57.138 "trsvcid": "8009", 00:29:57.138 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:57.138 "wait_for_attach": true, 00:29:57.138 "method": "bdev_nvme_start_discovery", 00:29:57.138 "req_id": 1 00:29:57.138 } 00:29:57.138 Got JSON-RPC error response 00:29:57.138 response: 00:29:57.138 { 00:29:57.138 "code": -17, 00:29:57.138 "message": "File exists" 00:29:57.138 } 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:57.138 12:25:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.138 request: 00:29:57.138 { 00:29:57.138 "name": "nvme_second", 00:29:57.138 "trtype": "tcp", 00:29:57.138 "traddr": "10.0.0.2", 00:29:57.138 "adrfam": "ipv4", 00:29:57.138 "trsvcid": "8009", 00:29:57.138 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:57.138 "wait_for_attach": true, 00:29:57.138 "method": "bdev_nvme_start_discovery", 00:29:57.138 "req_id": 1 00:29:57.138 } 00:29:57.138 Got JSON-RPC error response 00:29:57.138 response: 00:29:57.138 { 00:29:57.138 "code": -17, 00:29:57.138 "message": "File exists" 00:29:57.138 } 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:57.138 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:57.139 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.397 12:25:05 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.397 12:25:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.331 [2024-07-22 12:25:06.131550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-07-22 12:25:06.131645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68bbc0 with addr=10.0.0.2, port=8010 00:29:58.331 [2024-07-22 12:25:06.131695] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:58.331 [2024-07-22 12:25:06.131719] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:58.331 [2024-07-22 12:25:06.131732] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:59.264 [2024-07-22 12:25:07.133917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.264 [2024-07-22 12:25:07.133958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6da0c0 with addr=10.0.0.2, port=8010 00:29:59.264 [2024-07-22 12:25:07.133980] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:59.264 [2024-07-22 12:25:07.133995] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:59.264 [2024-07-22 12:25:07.134031] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:00.639 [2024-07-22 12:25:08.136161] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:00.639 request: 00:30:00.639 { 00:30:00.639 "name": "nvme_second", 00:30:00.639 "trtype": "tcp", 00:30:00.639 "traddr": "10.0.0.2", 00:30:00.639 "adrfam": "ipv4", 00:30:00.639 "trsvcid": "8010", 00:30:00.639 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:00.639 "wait_for_attach": false, 00:30:00.639 "attach_timeout_ms": 3000, 00:30:00.639 "method": "bdev_nvme_start_discovery", 00:30:00.639 "req_id": 1 00:30:00.639 } 00:30:00.639 Got JSON-RPC error response 00:30:00.639 response: 00:30:00.639 { 00:30:00.639 "code": -110, 
00:30:00.639 "message": "Connection timed out" 00:30:00.639 } 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1110153 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:00.639 rmmod nvme_tcp 00:30:00.639 rmmod nvme_fabrics 00:30:00.639 rmmod nvme_keyring 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1110017 ']' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1110017 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1110017 ']' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1110017 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1110017 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1110017' 00:30:00.639 killing process with pid 1110017 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1110017 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1110017 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.639 12:25:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:03.169 00:30:03.169 real 0m13.065s 00:30:03.169 user 0m18.927s 00:30:03.169 sys 0m2.753s 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.169 ************************************ 00:30:03.169 END TEST nvmf_host_discovery 00:30:03.169 ************************************ 00:30:03.169 12:25:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:03.169 12:25:10 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:03.169 12:25:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:03.169 12:25:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.169 12:25:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.169 ************************************ 00:30:03.169 START TEST nvmf_host_multipath_status 00:30:03.169 ************************************ 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:03.169 * Looking for test storage... 
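With nvmf_host_discovery finished, run_test moves straight on to the multipath status suite; the START/END banners and the real/user/sys timing summary above are its standard wrapper. Assuming a checked-out SPDK workspace like this job's, the same suite can be launched standalone (root is typically required, since it loads the nvme-tcp kernel modules):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/host/multipath_status.sh --transport=tcp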
00:30:03.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.169 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:03.170 12:25:10 
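Stripped of the repeated PATH exports, the knobs that drive the rest of this test are the ones set in the multipath_status.sh preamble traced above; collected in one place for reference ($SPDK is a shorthand introduced for this sketch):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
MALLOC_BDEV_SIZE=64                        # MiB backing the test namespace
MALLOC_BLOCK_SIZE=512                      # bytes per block
rpc_py=$SPDK/scripts/rpc.py                # RPC client (socket picked with -s)
bpf_sh=$SPDK/scripts/bpftrace.sh
bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # initiator-side RPC socket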
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:03.170 12:25:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:05.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:05.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:05.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:05.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:05.072 12:25:12 
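gather_supported_nvmf_pci_devs has now matched two Intel E810 ports (device 0x159b, ice driver) and mapped each to its kernel netdev. The same lookup can be reproduced by hand; the device IDs come from the trace, while the lspci/sysfs commands are an illustration introduced here, not part of the script:

lspci -d 8086:159b                          # -> 0000:0a:00.0 and 0000:0a:00.1
ls /sys/bus/pci/devices/0000:0a:00.0/net    # -> cvl_0_0 (becomes the target port)
ls /sys/bus/pci/devices/0000:0a:00.1/net    # -> cvl_0_1 (becomes the initiator port)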
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:30:05.072 00:30:05.072 --- 10.0.0.2 ping statistics --- 00:30:05.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.072 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:30:05.072 00:30:05.072 --- 10.0.0.1 ping statistics --- 00:30:05.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.072 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1113202 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1113202 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1113202 ']' 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.072 12:25:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:05.072 [2024-07-22 12:25:12.882769] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
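nvmf_tcp_init has now built the two-namespace topology this test runs on: the target port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace at 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1, and both directions are verified with a ping. The sequence, collected from the trace above:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator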
00:30:05.072 [2024-07-22 12:25:12.882866] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.072 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.072 [2024-07-22 12:25:12.922461] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:05.072 [2024-07-22 12:25:12.948745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:05.329 [2024-07-22 12:25:13.039900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.329 [2024-07-22 12:25:13.039952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.329 [2024-07-22 12:25:13.039982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.329 [2024-07-22 12:25:13.039994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.329 [2024-07-22 12:25:13.040004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.329 [2024-07-22 12:25:13.040086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.329 [2024-07-22 12:25:13.040091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.329 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.329 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:05.329 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:05.329 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.329 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:05.329 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.330 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1113202 00:30:05.330 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:05.586 [2024-07-22 12:25:13.457937] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.586 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:06.157 Malloc0 00:30:06.157 12:25:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:06.157 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.414 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:30:06.672 [2024-07-22 12:25:14.558353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.672 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:06.931 [2024-07-22 12:25:14.851358] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1113480 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1113480 /var/tmp/bdevperf.sock 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1113480 ']' 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:07.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
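With nvmf_tgt running inside the namespace (started as ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 1113202), the RPCs traced above provision one subsystem with a 64 MiB malloc namespace and two TCP listeners that become the two multipath legs. A consolidated sketch, $rpc_py as defined earlier:

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421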
00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:07.190 12:25:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:07.447 12:25:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.447 12:25:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:07.447 12:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:07.706 12:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:07.964 Nvme0n1 00:30:07.964 12:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:08.532 Nvme0n1 00:30:08.532 12:25:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:08.532 12:25:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:10.434 12:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:10.434 12:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:10.691 12:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:10.949 12:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:11.916 12:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:11.916 12:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:11.916 12:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.916 12:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:12.173 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.173 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:12.173 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.173 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- 
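On the initiator side, bdevperf (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) is told to attach the same subsystem over both listeners; the second attach adds -x multipath, so the 4421 leg becomes an extra path on Nvme0n1 rather than a new bdev. Sketch of the calls traced above (flags taken verbatim from the log):

$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# kicked off so the verify workload runs alongside the checks below
# (backgrounding is an assumption; the trace only shows the invocation)
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &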
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:12.431 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:12.431 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:12.431 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.431 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:12.688 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.688 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:12.688 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.688 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:12.945 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.945 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:12.945 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.945 12:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:13.200 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.200 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:13.200 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.200 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:13.457 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.457 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:13.457 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:13.714 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:13.970 12:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:15.339 12:25:22 
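Every check_status round in this test reduces to the same port_status probe: ask bdevperf for its io_paths and pull one boolean (current, connected or accessible) for the path with a given trsvcid. A hedged reconstruction of the helper as exercised in the trace:

# usage: port_status 4420 current true
port_status() {
    local port=$1 field=$2 expected=$3 got
    got=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$got" == "$expected" ]]
}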
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:15.339 12:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:15.339 12:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.339 12:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:15.339 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:15.339 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:15.339 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.339 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:15.596 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.596 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:15.596 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.596 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:15.853 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.853 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:15.853 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.853 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:16.110 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.110 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:16.110 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:16.110 12:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.367 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.367 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:16.367 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.367 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:16.624 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.624 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:16.624 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:16.881 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:17.140 12:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:18.076 12:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:18.076 12:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:18.076 12:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.076 12:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.333 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.333 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:18.333 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.333 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:18.591 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:18.591 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:18.591 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.591 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:18.848 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.848 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:18.848 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.848 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
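The transitions between rounds all go through set_ANA_state, which reprograms the ANA state of each target listener; the sleep 1 after every call gives the host time to pick up the change. Reconstructed from the paired calls traced above (argument order: 4420-state, then 4421-state):

# usage: set_ANA_state non_optimized optimized
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}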
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:19.105 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.105 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:19.105 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.105 12:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.362 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.362 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:19.362 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.362 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.620 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.620 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:19.620 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:19.877 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:20.136 12:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:21.072 12:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:21.072 12:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:21.072 12:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.072 12:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:21.330 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.330 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:21.330 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.330 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:21.587 12:25:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.587 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:21.587 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.587 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:21.844 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.844 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:21.844 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.844 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:22.102 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.102 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:22.102 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.102 12:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.360 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.360 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:22.360 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.360 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:22.618 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.618 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:22.618 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:22.876 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:23.134 12:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:24.070 12:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:24.070 12:25:31 
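Read together, these rounds map ANA state directly onto the three per-path flags: connected stays true as long as the TCP connection is up, accessible is true unless the listener is inaccessible, and (under the default active_passive policy) exactly one accessible path is current, preferring an optimized one. In check_status argument order (4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible), the expectations traced above and in the round just entered are:

# ANA 4420/4421               -> expected flags
# optimized/optimized         -> true  false true true true  true
# non_optimized/optimized     -> false true  true true true  true
# non_optimized/non_optimized -> true  false true true true  true
# non_optimized/inaccessible  -> true  false true true true  false
# inaccessible/inaccessible   -> false false true true false false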
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:24.070 12:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.070 12:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:24.328 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.328 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:24.328 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.328 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:24.586 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.586 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:24.586 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.586 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:24.844 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.844 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:24.844 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.844 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:25.101 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.101 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:25.101 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.101 12:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:25.359 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.359 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:25.359 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.359 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:25.616 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.616 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:25.616 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:25.884 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:26.150 12:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:27.106 12:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:27.106 12:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:27.106 12:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.106 12:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:27.363 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:27.363 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:27.363 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.363 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:27.620 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.620 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:27.620 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.620 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:27.877 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.877 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:27.877 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.877 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:28.134 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.134 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:28.134 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.134 12:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:28.392 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.392 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:28.392 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.392 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:28.650 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.650 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:28.908 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:28.908 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:29.166 12:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:29.425 12:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:30.361 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:30.361 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:30.361 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.361 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:30.619 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.619 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:30.619 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.619 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:30:30.877 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.877 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:30.877 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.877 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:31.135 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.135 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.135 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.136 12:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:31.393 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.393 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:31.393 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.393 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.652 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.652 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:31.652 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.652 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.910 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.910 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:31.910 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:32.167 12:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:32.426 12:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:33.360 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:30:33.360 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:33.360 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.360 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.618 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:33.618 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:33.618 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.618 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.877 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.877 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.877 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.877 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:34.135 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.135 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:34.135 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.135 12:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.394 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.394 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.394 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.394 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.652 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.652 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:34.652 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.652 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.911 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.911 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:34.911 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:35.167 12:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:35.426 12:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:36.357 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:36.357 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:36.357 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.357 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.614 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.614 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:36.614 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.614 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.871 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.871 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.871 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.871 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:37.128 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.128 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:37.128 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.128 12:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:37.386 12:25:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.386 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:37.386 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.386 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.643 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.643 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:37.643 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.643 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:37.900 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.900 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:37.900 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:38.156 12:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:38.414 12:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:39.347 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:39.347 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:39.347 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.347 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.604 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.604 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:39.604 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.604 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:39.861 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.861 12:25:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:39.861 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.861 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:40.122 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.122 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:40.122 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.122 12:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:40.379 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.379 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:40.379 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.379 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.636 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.636 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:40.636 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.636 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:40.894 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1113480 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1113480 ']' 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1113480 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1113480 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1113480' 00:30:40.895 killing process with pid 1113480 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1113480 00:30:40.895 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1113480 00:30:40.895 Connection closed with partial response: 00:30:40.895 00:30:40.895 00:30:41.210 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1113480 00:30:41.210 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:41.210 [2024-07-22 12:25:14.913026] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:30:41.210 [2024-07-22 12:25:14.913107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113480 ] 00:30:41.210 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.210 [2024-07-22 12:25:14.944284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:41.210 [2024-07-22 12:25:14.971705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.210 [2024-07-22 12:25:15.057023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.210 Running I/O for 90 seconds... 00:30:41.210 [2024-07-22 12:25:30.615118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.210 [2024-07-22 12:25:30.615181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
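Each check_status round traced above expands into six port_status probes, and every probe is the same two steps: dump bdev_nvme_get_io_paths over the bdevperf RPC socket, then filter one field with jq and compare it to the expected value. A minimal bash sketch of that helper, assuming rpc.py is reachable at a relative scripts/ path and the /var/tmp/bdevperf.sock socket from the trace:

  # port_status <trsvcid> <field> <expected>: compare one io_path attribute
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }
  # e.g. the round after set_ANA_state inaccessible optimized:
  port_status 4420 current false && port_status 4421 current true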
00:30:41.210 [2024-07-22 12:25:30.615432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.615861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.615877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.616947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.616977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:30:41.210 [2024-07-22 12:25:30.617331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:41.210 [2024-07-22 12:25:30.617716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.210 [2024-07-22 12:25:30.617733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.617757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.617773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.617797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.617813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.617842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.617859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.617883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.617899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.617938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.617954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.617979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618174] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 
12:25:30.618585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.618958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.618997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61152 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:73 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:41.211 [2024-07-22 12:25:30.619710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.211 [2024-07-22 12:25:30.619728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.619756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.619773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.619801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.619817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.619846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.619862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.619890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.619922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.619951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.619967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.619995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
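Each command record in this dump carries the queue bookkeeping inline: sqid/cid identify the queue and slot, lba/len describe the I/O (len:8 blocks per write here), and the paired completion echoes the cid plus the advancing sqhd. Plain grep is enough to pull the I/O pattern back out of the saved log:

  # list the LBAs bdevperf touched, in submission order
  grep -o 'lba:[0-9]*' try.txt | head
  # and tally the (sct/sc) status pairs that came back
  grep -o '([0-9a-f]*/[0-9a-f]*)' try.txt | sort | uniq -c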
00:30:41.212 [2024-07-22 12:25:30.620523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.620969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.620985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.621028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.621072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.621114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.621157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.621202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.212 [2024-07-22 12:25:30.621245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.212 [2024-07-22 12:25:30.621292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.212 [2024-07-22 12:25:30.621335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.212 [2024-07-22 12:25:30.621378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.212 [2024-07-22 12:25:30.621421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.212 [2024-07-22 12:25:30.621464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.212 [2024-07-22 12:25:30.621506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.621549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:30.621577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:30.621593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:41.212 [2024-07-22 12:25:46.139695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.212 [2024-07-22 12:25:46.139767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.139828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.139850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.139875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.139891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.139914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.139935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.139958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.139989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:41.213 [2024-07-22 12:25:46.140034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.140074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.140982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.140998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.141020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.141037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.141058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.141074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.141095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.213 [2024-07-22 12:25:46.141111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.141133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.141148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.141170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.141186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:30:41.213 [2024-07-22 12:25:46.141208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.141238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.141261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.213 [2024-07-22 12:25:46.141276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:41.213 [2024-07-22 12:25:46.141297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.141312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.141348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.141384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.141420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.141465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.141501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.141537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.141574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.141610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.141635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.143478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.143531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.143567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.143604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.143670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.143693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.143715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:41.214 [2024-07-22 12:25:46.144429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.214 [2024-07-22 12:25:46.144684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.144722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.144760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.214 [2024-07-22 12:25:46.144796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:41.214 [2024-07-22 12:25:46.144818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
00:30:41.215 Received shutdown signal, test time was about 32.255219 seconds
00:30:41.215
00:30:41.215                                Latency(us)
00:30:41.215 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min        max
00:30:41.215 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:41.215 Verification LBA range: start 0x0 length 0x4000
00:30:41.215 Nvme0n1            :      32.25  7956.58    31.08     0.00   0.00   16062.02  1468.49 4026531.84
00:30:41.215 ===================================================================================================================
00:30:41.215 Total              :             7956.58    31.08     0.00   0.00   16062.02  1468.49 4026531.84
00:30:41.215 12:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
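The teardown traced above boils down to a handful of commands. A minimal sketch, assuming the single subsystem nqn.2016-06.io.spdk:cnode1 and the default RPC socket; nvmftestfini does more bookkeeping than is shown here:

    # Fail outstanding multipath I/O back to the host by deleting the subsystem,
    # then clear the test's trap/scratch file and unload the initiator modules.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    trap - SIGINT SIGTERM EXIT
    rm -f "$SPDK_DIR/test/nvmf/host/try.txt"

    sync
    # modprobe -r also removes now-unused dependencies (nvme_fabrics and
    # nvme_keyring in the trace above); tolerate modules already gone.
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true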
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1113202 ']'
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1113202
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1113202 ']'
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1113202
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1113202
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1113202'
killing process with pid 1113202
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1113202
00:30:41.472 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1113202
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:41.729 12:25:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:43.633 12:25:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:43.633
00:30:43.633 real	0m40.875s
00:30:43.633 user	2m2.913s
00:30:43.633 sys	0m10.618s
00:30:43.633 12:25:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:43.633 12:25:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:43.633 ************************************
00:30:43.633 END TEST nvmf_host_multipath_status
00:30:43.633 ************************************
00:30:43.633 12:25:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:30:43.633 12:25:51 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:30:43.633 12:25:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:30:43.633 12:25:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:43.633 12:25:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
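The killprocess xtrace above follows a defensive pattern worth noting: check the pid still exists, make sure it is the process that was started (an SPDK reactor, never a stray sudo), then kill and reap it. Roughly, as a sketch rather than the verbatim autotest_common.sh helper:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                    # already exited
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1    # refuse to kill sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }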
00:30:43.892 ************************************
00:30:43.892 START TEST nvmf_discovery_remove_ifc
00:30:43.892 ************************************
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:30:43.892 * Looking for test storage...
00:30:43.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2-6 -- # [log condensed: export.sh@2-@4 prepend /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to an already heavily duplicated PATH, @5 exports PATH, and @6 echoes the multi-kilobyte result; the repeated PATH values are omitted here]
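The host identity set up earlier in this trace comes straight from nvme-cli. A sketch of the same derivation, with variable names following nvmf/common.sh:

    # Generate a host NQN and derive the host ID from its uuid suffix, then
    # collect the flags every "nvme connect" in the test will pass.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare <uuid>
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")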
00:30:43.892 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:30:43.893 12:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:30:45.792 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:45.792 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291-298 -- # [log condensed: declares the pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays]
00:30:45.792 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301-318 -- # [log condensed: fills the e810, x722 and mlx arrays from pci_bus_cache for the known Intel (0x1592, 0x159b, 0x37d2) and Mellanox (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) device IDs]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342-352 -- # [log condensed: driver/ID checks for 0000:0a:00.0: ice is neither unknown nor unbound, 0x159b is neither 0x1017 nor 0x1019, and the transport is not rdma]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342-352 -- # [log condensed: the same driver/ID checks for 0000:0a:00.1]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388-399 -- # [log condensed: the interface under 0000:0a:00.0 is up, so its basename is kept]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388-399 -- # [log condensed: the interface under 0000:0a:00.1 is up, so its basename is kept]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
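The device discovery condensed above is plain sysfs walking: every supported PCI function exposes its kernel netdev names under /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of the same lookup:

    # Map PCI functions to their kernel net devices, as
    # gather_supported_nvmf_pci_devs does for the two e810 ports in this run.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue      # no netdev bound to this function
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done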
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:30:45.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:45.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms
00:30:45.793
00:30:45.793 --- 10.0.0.2 ping statistics ---
00:30:45.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:45.793 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
00:30:45.793 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:45.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:45.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:30:45.793
00:30:45.793 --- 10.0.0.1 ping statistics ---
00:30:45.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:45.793 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
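The nvmf_tcp_init sequence above splits the two ports of one NIC into a target namespace and the root (initiator) namespace. Replayed as a standalone sketch; interface and namespace names are the ones from this run:

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1
    NS=cvl_0_0_ns_spdk
    TGT_IP=10.0.0.2; INI_IP=10.0.0.1

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    # Target port lives in its own namespace; initiator port stays in root.
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add "$INI_IP/24" dev "$INI_IF"
    ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port toward the initiator, then verify both directions.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 "$TGT_IP"
    ip netns exec "$NS" ping -c 1 "$INI_IP"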
00:30:45.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:30:45.793 00:30:45.793 --- 10.0.0.1 ping statistics --- 00:30:45.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.793 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1119545 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1119545 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1119545 ']' 00:30:46.050 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.051 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.051 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.051 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.051 12:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.051 [2024-07-22 12:25:53.795241] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:30:46.051 [2024-07-22 12:25:53.795325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.051 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.051 [2024-07-22 12:25:53.831545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:30:46.051 [2024-07-22 12:25:53.861504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.051 [2024-07-22 12:25:53.953426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.051 [2024-07-22 12:25:53.953472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.051 [2024-07-22 12:25:53.953499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.051 [2024-07-22 12:25:53.953513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.051 [2024-07-22 12:25:53.953525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.051 [2024-07-22 12:25:53.953561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.309 [2024-07-22 12:25:54.109562] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.309 [2024-07-22 12:25:54.117786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:46.309 null0 00:30:46.309 [2024-07-22 12:25:54.149716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1119629 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1119629 /tmp/host.sock 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1119629 ']' 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:30:46.309 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.309 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.309 [2024-07-22 12:25:54.214871] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:30:46.309 [2024-07-22 12:25:54.214955] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119629 ] 00:30:46.568 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.568 [2024-07-22 12:25:54.248072] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:46.568 [2024-07-22 12:25:54.277851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.568 [2024-07-22 12:25:54.368348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.568 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.568 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:30:46.568 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.569 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:46.569 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.569 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.569 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.569 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:46.569 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.569 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.828 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.828 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:46.828 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.828 12:25:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.761 [2024-07-22 12:25:55.638427] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:47.761 [2024-07-22 12:25:55.638456] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:47.761 [2024-07-22 12:25:55.638482] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:48.018 [2024-07-22 12:25:55.766904] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:48.277 [2024-07-22 12:25:55.991249] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:48.277 [2024-07-22 12:25:55.991322] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:48.277 [2024-07-22 12:25:55.991366] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:48.277 [2024-07-22 12:25:55.991393] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:48.277 [2024-07-22 12:25:55.991424] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.277 12:25:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:48.277 [2024-07-22 12:25:55.996255] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x124e370 was disconnected and freed. delete nvme_qpair. 
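The xtrace above shows the suite's get_bdev_list helper querying the host app over its private RPC socket and normalizing the result. A minimal standalone sketch of what the traced pipeline (rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq | sort | xargs) appears to do; calling scripts/rpc.py directly instead of the suite's rpc_cmd wrapper is an assumption made for this sketch:

    # Sketch: list the bdev names known to the host app on /tmp/host.sock,
    # sorted and joined onto one line (e.g. "nvme0n1").
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' \
            | sort \
            | xargs
    }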
00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:48.277 12:25:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:49.214 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:49.214 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.214 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:49.214 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.214 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.214 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:49.214 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:49.471 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.471 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:49.471 12:25:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:50.404 12:25:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:51.340 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.340 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.340 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.340 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.340 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.341 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.341 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.341 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.341 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:51.341 12:25:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:52.714 12:26:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
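The repeated get_bdev_list/sleep pairs above are the wait_for_bdev poll traced at discovery_remove_ifc.sh@33-34: the bdev list is re-read once per second until it matches the expected value. A hedged sketch of that loop, built on the get_bdev_list sketch earlier (the real script may bound the retries; this minimal form does not):

    # Sketch: poll the bdev list once per second until it equals $1
    # ('' while waiting for the bdev to disappear, a name otherwise).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1    # matches the 'sleep 1' at discovery_remove_ifc.sh@34
        done
    }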
00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:53.649 12:26:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:53.649 [2024-07-22 12:26:01.432853] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:53.649 [2024-07-22 12:26:01.432935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.649 [2024-07-22 12:26:01.432983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.649 [2024-07-22 12:26:01.433003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.649 [2024-07-22 12:26:01.433018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.649 [2024-07-22 12:26:01.433033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.649 [2024-07-22 12:26:01.433047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.649 [2024-07-22 12:26:01.433062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.649 [2024-07-22 12:26:01.433078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.649 [2024-07-22 12:26:01.433094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.649 [2024-07-22 12:26:01.433109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.649 [2024-07-22 12:26:01.433125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1214d60 is same with the state(5) to be set 00:30:53.649 [2024-07-22 12:26:01.442870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1214d60 (9): Bad file descriptor 00:30:53.649 [2024-07-22 12:26:01.452919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.586 [2024-07-22 12:26:02.504676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:54.586 [2024-07-22 
12:26:02.504753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1214d60 with addr=10.0.0.2, port=4420 00:30:54.586 [2024-07-22 12:26:02.504783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1214d60 is same with the state(5) to be set 00:30:54.586 [2024-07-22 12:26:02.504840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1214d60 (9): Bad file descriptor 00:30:54.586 [2024-07-22 12:26:02.505346] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:54.586 [2024-07-22 12:26:02.505382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.586 [2024-07-22 12:26:02.505411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.586 [2024-07-22 12:26:02.505432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.586 [2024-07-22 12:26:02.505474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.586 [2024-07-22 12:26:02.505497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.586 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.845 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:54.845 12:26:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.777 [2024-07-22 12:26:03.508003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:55.777 [2024-07-22 12:26:03.508036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:55.777 [2024-07-22 12:26:03.508052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:55.777 [2024-07-22 12:26:03.508066] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:55.777 [2024-07-22 12:26:03.508088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
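The reconnect attempts and the "Resetting controller failed" errors above follow directly from the short timeouts the test passed to bdev_nvme_start_discovery at discovery_remove_ifc.sh@69, traced earlier in this log. For reference, that call with the addresses, NQN, and flags copied verbatim from the trace (only the direct rpc.py invocation in place of the rpc_cmd wrapper is assumed):

    # As traced: start discovery against the target with aggressive
    # reconnect/loss timeouts so a dead interface fails over in seconds.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach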
00:30:55.777 [2024-07-22 12:26:03.508141] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:55.777 [2024-07-22 12:26:03.508194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.777 [2024-07-22 12:26:03.508218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.777 [2024-07-22 12:26:03.508239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.777 [2024-07-22 12:26:03.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.777 [2024-07-22 12:26:03.508271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.777 [2024-07-22 12:26:03.508287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.777 [2024-07-22 12:26:03.508302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.777 [2024-07-22 12:26:03.508319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.777 [2024-07-22 12:26:03.508336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.777 [2024-07-22 12:26:03.508351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.777 [2024-07-22 12:26:03.508365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
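The failed-state transitions above were induced deliberately: at discovery_remove_ifc.sh@75-76 the test removed the target's address and downed its interface inside the namespace, and at @82-83 (traced just below) it restores them before expecting rediscovery. The four commands, copied from the trace:

    # Induce the failure: drop the target IP and link in the namespace.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # Recover: restore the address and bring the link back up.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up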
00:30:55.777 [2024-07-22 12:26:03.508627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1214210 (9): Bad file descriptor 00:30:55.777 [2024-07-22 12:26:03.509666] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:55.777 [2024-07-22 12:26:03.509688] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:55.777 12:26:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:56.710 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:56.710 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.710 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:56.969 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.969 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:30:56.969 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:56.969 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:56.969 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.969 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:56.969 12:26:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:57.948 [2024-07-22 12:26:05.523388] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:57.948 [2024-07-22 12:26:05.523427] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:57.948 [2024-07-22 12:26:05.523451] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:57.948 [2024-07-22 12:26:05.610760] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:57.948 [2024-07-22 12:26:05.674577] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:57.948 [2024-07-22 12:26:05.674651] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:57.948 [2024-07-22 12:26:05.674696] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:57.948 [2024-07-22 12:26:05.674719] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:57.948 [2024-07-22 12:26:05.674733] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:57.948 [2024-07-22 12:26:05.682281] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12225b0 was disconnected and freed. delete nvme_qpair. 
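Note the controller name change above: the original nvme0 was torn down when its qpair was freed, so the rediscovered subsystem attaches as nvme1 and the namespace reappears as nvme1n1. A quick check, under the same rpc.py assumption as the sketches earlier:

    # The rediscovered namespace should list as nvme1n1, not nvme0n1,
    # because the failed controller's bdev was deleted along with it.
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
    # expected: nvme1n1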
00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1119629 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1119629 ']' 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1119629 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119629 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119629' 00:30:57.948 killing process with pid 1119629 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1119629 00:30:57.948 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1119629 00:30:58.208 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:58.208 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:58.208 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:30:58.208 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:58.208 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:30:58.208 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:58.208 12:26:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:58.208 rmmod nvme_tcp 00:30:58.208 rmmod nvme_fabrics 00:30:58.208 rmmod nvme_keyring 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
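Teardown above runs through killprocess once per app (hostpid 1119629 here, nvmfpid 1119545 just below). A hedged sketch of the pattern visible in the trace at common/autotest_common.sh@948-972: confirm the pid is set, alive, and an SPDK reactor before killing and reaping it. The traced run takes the Linux, non-sudo branch; other branches are omitted here as assumptions about code this log never exercises:

    # Sketch of the traced killprocess pattern (Linux branch only).
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1                  # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        [[ "$name" != sudo ]] || return 1           # sudo wrapper case not shown
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap, ignoring exit status
    }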
00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1119545 ']' 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1119545 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1119545 ']' 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1119545 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119545 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119545' 00:30:58.208 killing process with pid 1119545 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1119545 00:30:58.208 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1119545 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:58.467 12:26:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.001 12:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:01.001 00:31:01.001 real 0m16.755s 00:31:01.001 user 0m23.875s 00:31:01.001 sys 0m2.948s 00:31:01.001 12:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:01.001 12:26:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:01.001 ************************************ 00:31:01.001 END TEST nvmf_discovery_remove_ifc 00:31:01.001 ************************************ 00:31:01.001 12:26:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:01.001 12:26:08 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:01.001 12:26:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:01.001 12:26:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.001 12:26:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.001 ************************************ 00:31:01.001 START TEST nvmf_identify_kernel_target 00:31:01.001 ************************************ 
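Each suite is driven through run_test, whose START/END banners and time summary frame the output above (real 0m16.755s for nvmf_discovery_remove_ifc) and the nvmf_identify_kernel_target run that follows. A minimal sketch of the wrapper's visible behavior; the real helper also propagates exit codes and xtrace settings, which this form glosses over:

    # Sketch: banner, time the test script, banner again.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g., as traced at nvmf/nvmf.sh@104:
    #   run_test nvmf_identify_kernel_target \
    #       test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp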
00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:01.001 * Looking for test storage... 00:31:01.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:01.001 12:26:08 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:01.001 12:26:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:02.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:02.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.905 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:02.906 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:02.906 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:02.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:31:02.906 00:31:02.906 --- 10.0.0.2 ping statistics --- 00:31:02.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.906 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:31:02.906 00:31:02.906 --- 10.0.0.1 ping statistics --- 00:31:02.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.906 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:02.906 12:26:10 
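
Collected into one place, the network bring-up just traced pushes one port of the dual-port NIC into a private namespace, so the target side (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator side (10.0.0.1, root namespace) reach each other over the physical link. The same sequence as a standalone script (root required; interface names are the ones discovered above):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator port stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                          # root ns -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> root ns
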
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:02.906 12:26:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:03.843 Waiting for block devices as requested 00:31:03.843 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:04.101 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:04.101 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:04.360 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:04.360 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:04.360 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:04.360 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:04.619 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:04.619 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:04.619 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:04.619 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:04.877 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:04.877 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:04.877 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:04.877 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:04.877 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:05.136 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:05.136 12:26:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:05.136 No valid GPT data, bailing 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:05.137 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:05.397 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:05.397 00:31:05.397 Discovery Log Number of Records 2, Generation counter 2 00:31:05.397 =====Discovery Log Entry 0====== 00:31:05.397 trtype: tcp 00:31:05.397 adrfam: ipv4 00:31:05.397 subtype: current discovery subsystem 00:31:05.397 treq: not specified, sq flow control disable supported 00:31:05.397 portid: 1 00:31:05.397 trsvcid: 4420 00:31:05.397 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:05.397 traddr: 10.0.0.1 00:31:05.397 eflags: none 00:31:05.397 sectype: none 00:31:05.397 =====Discovery Log Entry 1====== 00:31:05.397 trtype: tcp 00:31:05.397 adrfam: ipv4 00:31:05.397 subtype: nvme subsystem 00:31:05.397 treq: not specified, sq flow control disable supported 00:31:05.397 portid: 1 00:31:05.397 trsvcid: 4420 00:31:05.397 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:05.397 traddr: 10.0.0.1 00:31:05.397 eflags: none 00:31:05.397 sectype: none 00:31:05.397 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:05.397 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:05.397 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.397 ===================================================== 00:31:05.397 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:05.397 ===================================================== 00:31:05.397 Controller Capabilities/Features 00:31:05.397 ================================ 00:31:05.397 Vendor ID: 0000 00:31:05.397 Subsystem Vendor ID: 0000 00:31:05.397 Serial Number: 6ad64bdc894870a5e93e 00:31:05.397 Model Number: Linux 00:31:05.397 Firmware Version: 6.7.0-68 00:31:05.397 Recommended Arb Burst: 0 00:31:05.397 IEEE OUI Identifier: 00 00 00 00:31:05.397 Multi-path I/O 00:31:05.397 May have multiple subsystem ports: No 00:31:05.397 May have multiple 
controllers: No 00:31:05.397 Associated with SR-IOV VF: No 00:31:05.397 Max Data Transfer Size: Unlimited 00:31:05.397 Max Number of Namespaces: 0 00:31:05.397 Max Number of I/O Queues: 1024 00:31:05.397 NVMe Specification Version (VS): 1.3 00:31:05.397 NVMe Specification Version (Identify): 1.3 00:31:05.397 Maximum Queue Entries: 1024 00:31:05.397 Contiguous Queues Required: No 00:31:05.397 Arbitration Mechanisms Supported 00:31:05.397 Weighted Round Robin: Not Supported 00:31:05.397 Vendor Specific: Not Supported 00:31:05.397 Reset Timeout: 7500 ms 00:31:05.397 Doorbell Stride: 4 bytes 00:31:05.397 NVM Subsystem Reset: Not Supported 00:31:05.397 Command Sets Supported 00:31:05.397 NVM Command Set: Supported 00:31:05.397 Boot Partition: Not Supported 00:31:05.397 Memory Page Size Minimum: 4096 bytes 00:31:05.397 Memory Page Size Maximum: 4096 bytes 00:31:05.397 Persistent Memory Region: Not Supported 00:31:05.397 Optional Asynchronous Events Supported 00:31:05.397 Namespace Attribute Notices: Not Supported 00:31:05.397 Firmware Activation Notices: Not Supported 00:31:05.397 ANA Change Notices: Not Supported 00:31:05.397 PLE Aggregate Log Change Notices: Not Supported 00:31:05.397 LBA Status Info Alert Notices: Not Supported 00:31:05.397 EGE Aggregate Log Change Notices: Not Supported 00:31:05.397 Normal NVM Subsystem Shutdown event: Not Supported 00:31:05.397 Zone Descriptor Change Notices: Not Supported 00:31:05.397 Discovery Log Change Notices: Supported 00:31:05.397 Controller Attributes 00:31:05.397 128-bit Host Identifier: Not Supported 00:31:05.397 Non-Operational Permissive Mode: Not Supported 00:31:05.397 NVM Sets: Not Supported 00:31:05.397 Read Recovery Levels: Not Supported 00:31:05.397 Endurance Groups: Not Supported 00:31:05.397 Predictable Latency Mode: Not Supported 00:31:05.397 Traffic Based Keep ALive: Not Supported 00:31:05.397 Namespace Granularity: Not Supported 00:31:05.397 SQ Associations: Not Supported 00:31:05.397 UUID List: Not Supported 00:31:05.397 Multi-Domain Subsystem: Not Supported 00:31:05.397 Fixed Capacity Management: Not Supported 00:31:05.397 Variable Capacity Management: Not Supported 00:31:05.397 Delete Endurance Group: Not Supported 00:31:05.397 Delete NVM Set: Not Supported 00:31:05.397 Extended LBA Formats Supported: Not Supported 00:31:05.397 Flexible Data Placement Supported: Not Supported 00:31:05.397 00:31:05.397 Controller Memory Buffer Support 00:31:05.397 ================================ 00:31:05.397 Supported: No 00:31:05.397 00:31:05.397 Persistent Memory Region Support 00:31:05.397 ================================ 00:31:05.397 Supported: No 00:31:05.397 00:31:05.397 Admin Command Set Attributes 00:31:05.397 ============================ 00:31:05.397 Security Send/Receive: Not Supported 00:31:05.397 Format NVM: Not Supported 00:31:05.397 Firmware Activate/Download: Not Supported 00:31:05.397 Namespace Management: Not Supported 00:31:05.397 Device Self-Test: Not Supported 00:31:05.397 Directives: Not Supported 00:31:05.397 NVMe-MI: Not Supported 00:31:05.397 Virtualization Management: Not Supported 00:31:05.397 Doorbell Buffer Config: Not Supported 00:31:05.397 Get LBA Status Capability: Not Supported 00:31:05.397 Command & Feature Lockdown Capability: Not Supported 00:31:05.397 Abort Command Limit: 1 00:31:05.397 Async Event Request Limit: 1 00:31:05.397 Number of Firmware Slots: N/A 00:31:05.397 Firmware Slot 1 Read-Only: N/A 00:31:05.397 Firmware Activation Without Reset: N/A 00:31:05.397 Multiple Update Detection Support: N/A 
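
The kernel target this identify output comes from was assembled a few lines back through nvmet configfs, after /dev/nvme0n1 was checked for a partition table (spdk-gpt.py / blkid, "No valid GPT data, bailing") and found free. xtrace does not print the redirection targets of the echo commands, so the attribute file names below are the stock nvmet configfs ones, not read from this log; treat this as a sketch of the same build rather than a literal replay:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe -a nvmet nvmet-tcp
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # model string, where the kernel provides attr_model
  echo 1 > "$sub/attr_allow_any_host"              # any host NQN may connect
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp  > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                 # publish the subsystem on the port
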
00:31:05.397 Firmware Update Granularity: No Information Provided 00:31:05.397 Per-Namespace SMART Log: No 00:31:05.397 Asymmetric Namespace Access Log Page: Not Supported 00:31:05.397 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:05.397 Command Effects Log Page: Not Supported 00:31:05.397 Get Log Page Extended Data: Supported 00:31:05.397 Telemetry Log Pages: Not Supported 00:31:05.397 Persistent Event Log Pages: Not Supported 00:31:05.397 Supported Log Pages Log Page: May Support 00:31:05.397 Commands Supported & Effects Log Page: Not Supported 00:31:05.397 Feature Identifiers & Effects Log Page:May Support 00:31:05.397 NVMe-MI Commands & Effects Log Page: May Support 00:31:05.397 Data Area 4 for Telemetry Log: Not Supported 00:31:05.397 Error Log Page Entries Supported: 1 00:31:05.397 Keep Alive: Not Supported 00:31:05.397 00:31:05.397 NVM Command Set Attributes 00:31:05.397 ========================== 00:31:05.397 Submission Queue Entry Size 00:31:05.397 Max: 1 00:31:05.397 Min: 1 00:31:05.397 Completion Queue Entry Size 00:31:05.397 Max: 1 00:31:05.397 Min: 1 00:31:05.397 Number of Namespaces: 0 00:31:05.397 Compare Command: Not Supported 00:31:05.397 Write Uncorrectable Command: Not Supported 00:31:05.397 Dataset Management Command: Not Supported 00:31:05.397 Write Zeroes Command: Not Supported 00:31:05.397 Set Features Save Field: Not Supported 00:31:05.397 Reservations: Not Supported 00:31:05.397 Timestamp: Not Supported 00:31:05.397 Copy: Not Supported 00:31:05.397 Volatile Write Cache: Not Present 00:31:05.397 Atomic Write Unit (Normal): 1 00:31:05.397 Atomic Write Unit (PFail): 1 00:31:05.397 Atomic Compare & Write Unit: 1 00:31:05.397 Fused Compare & Write: Not Supported 00:31:05.397 Scatter-Gather List 00:31:05.397 SGL Command Set: Supported 00:31:05.397 SGL Keyed: Not Supported 00:31:05.397 SGL Bit Bucket Descriptor: Not Supported 00:31:05.397 SGL Metadata Pointer: Not Supported 00:31:05.397 Oversized SGL: Not Supported 00:31:05.397 SGL Metadata Address: Not Supported 00:31:05.397 SGL Offset: Supported 00:31:05.397 Transport SGL Data Block: Not Supported 00:31:05.397 Replay Protected Memory Block: Not Supported 00:31:05.397 00:31:05.397 Firmware Slot Information 00:31:05.397 ========================= 00:31:05.397 Active slot: 0 00:31:05.397 00:31:05.397 00:31:05.397 Error Log 00:31:05.397 ========= 00:31:05.397 00:31:05.397 Active Namespaces 00:31:05.397 ================= 00:31:05.397 Discovery Log Page 00:31:05.397 ================== 00:31:05.397 Generation Counter: 2 00:31:05.397 Number of Records: 2 00:31:05.397 Record Format: 0 00:31:05.397 00:31:05.397 Discovery Log Entry 0 00:31:05.397 ---------------------- 00:31:05.397 Transport Type: 3 (TCP) 00:31:05.397 Address Family: 1 (IPv4) 00:31:05.397 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:05.397 Entry Flags: 00:31:05.397 Duplicate Returned Information: 0 00:31:05.397 Explicit Persistent Connection Support for Discovery: 0 00:31:05.397 Transport Requirements: 00:31:05.397 Secure Channel: Not Specified 00:31:05.397 Port ID: 1 (0x0001) 00:31:05.397 Controller ID: 65535 (0xffff) 00:31:05.397 Admin Max SQ Size: 32 00:31:05.397 Transport Service Identifier: 4420 00:31:05.397 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:05.397 Transport Address: 10.0.0.1 00:31:05.397 Discovery Log Entry 1 00:31:05.397 ---------------------- 00:31:05.397 Transport Type: 3 (TCP) 00:31:05.397 Address Family: 1 (IPv4) 00:31:05.397 Subsystem Type: 2 (NVM Subsystem) 00:31:05.397 Entry Flags: 
00:31:05.397 Duplicate Returned Information: 0 00:31:05.397 Explicit Persistent Connection Support for Discovery: 0 00:31:05.397 Transport Requirements: 00:31:05.397 Secure Channel: Not Specified 00:31:05.397 Port ID: 1 (0x0001) 00:31:05.397 Controller ID: 65535 (0xffff) 00:31:05.398 Admin Max SQ Size: 32 00:31:05.398 Transport Service Identifier: 4420 00:31:05.398 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:05.398 Transport Address: 10.0.0.1 00:31:05.398 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:05.398 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.398 get_feature(0x01) failed 00:31:05.398 get_feature(0x02) failed 00:31:05.398 get_feature(0x04) failed 00:31:05.398 ===================================================== 00:31:05.398 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:05.398 ===================================================== 00:31:05.398 Controller Capabilities/Features 00:31:05.398 ================================ 00:31:05.398 Vendor ID: 0000 00:31:05.398 Subsystem Vendor ID: 0000 00:31:05.398 Serial Number: 4ce1160d80bb9e2f56ea 00:31:05.398 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:05.398 Firmware Version: 6.7.0-68 00:31:05.398 Recommended Arb Burst: 6 00:31:05.398 IEEE OUI Identifier: 00 00 00 00:31:05.398 Multi-path I/O 00:31:05.398 May have multiple subsystem ports: Yes 00:31:05.398 May have multiple controllers: Yes 00:31:05.398 Associated with SR-IOV VF: No 00:31:05.398 Max Data Transfer Size: Unlimited 00:31:05.398 Max Number of Namespaces: 1024 00:31:05.398 Max Number of I/O Queues: 128 00:31:05.398 NVMe Specification Version (VS): 1.3 00:31:05.398 NVMe Specification Version (Identify): 1.3 00:31:05.398 Maximum Queue Entries: 1024 00:31:05.398 Contiguous Queues Required: No 00:31:05.398 Arbitration Mechanisms Supported 00:31:05.398 Weighted Round Robin: Not Supported 00:31:05.398 Vendor Specific: Not Supported 00:31:05.398 Reset Timeout: 7500 ms 00:31:05.398 Doorbell Stride: 4 bytes 00:31:05.398 NVM Subsystem Reset: Not Supported 00:31:05.398 Command Sets Supported 00:31:05.398 NVM Command Set: Supported 00:31:05.398 Boot Partition: Not Supported 00:31:05.398 Memory Page Size Minimum: 4096 bytes 00:31:05.398 Memory Page Size Maximum: 4096 bytes 00:31:05.398 Persistent Memory Region: Not Supported 00:31:05.398 Optional Asynchronous Events Supported 00:31:05.398 Namespace Attribute Notices: Supported 00:31:05.398 Firmware Activation Notices: Not Supported 00:31:05.398 ANA Change Notices: Supported 00:31:05.398 PLE Aggregate Log Change Notices: Not Supported 00:31:05.398 LBA Status Info Alert Notices: Not Supported 00:31:05.398 EGE Aggregate Log Change Notices: Not Supported 00:31:05.398 Normal NVM Subsystem Shutdown event: Not Supported 00:31:05.398 Zone Descriptor Change Notices: Not Supported 00:31:05.398 Discovery Log Change Notices: Not Supported 00:31:05.398 Controller Attributes 00:31:05.398 128-bit Host Identifier: Supported 00:31:05.398 Non-Operational Permissive Mode: Not Supported 00:31:05.398 NVM Sets: Not Supported 00:31:05.398 Read Recovery Levels: Not Supported 00:31:05.398 Endurance Groups: Not Supported 00:31:05.398 Predictable Latency Mode: Not Supported 00:31:05.398 Traffic Based Keep ALive: Supported 00:31:05.398 Namespace Granularity: Not Supported 
00:31:05.398 SQ Associations: Not Supported 00:31:05.398 UUID List: Not Supported 00:31:05.398 Multi-Domain Subsystem: Not Supported 00:31:05.398 Fixed Capacity Management: Not Supported 00:31:05.398 Variable Capacity Management: Not Supported 00:31:05.398 Delete Endurance Group: Not Supported 00:31:05.398 Delete NVM Set: Not Supported 00:31:05.398 Extended LBA Formats Supported: Not Supported 00:31:05.398 Flexible Data Placement Supported: Not Supported 00:31:05.398 00:31:05.398 Controller Memory Buffer Support 00:31:05.398 ================================ 00:31:05.398 Supported: No 00:31:05.398 00:31:05.398 Persistent Memory Region Support 00:31:05.398 ================================ 00:31:05.398 Supported: No 00:31:05.398 00:31:05.398 Admin Command Set Attributes 00:31:05.398 ============================ 00:31:05.398 Security Send/Receive: Not Supported 00:31:05.398 Format NVM: Not Supported 00:31:05.398 Firmware Activate/Download: Not Supported 00:31:05.398 Namespace Management: Not Supported 00:31:05.398 Device Self-Test: Not Supported 00:31:05.398 Directives: Not Supported 00:31:05.398 NVMe-MI: Not Supported 00:31:05.398 Virtualization Management: Not Supported 00:31:05.398 Doorbell Buffer Config: Not Supported 00:31:05.398 Get LBA Status Capability: Not Supported 00:31:05.398 Command & Feature Lockdown Capability: Not Supported 00:31:05.398 Abort Command Limit: 4 00:31:05.398 Async Event Request Limit: 4 00:31:05.398 Number of Firmware Slots: N/A 00:31:05.398 Firmware Slot 1 Read-Only: N/A 00:31:05.398 Firmware Activation Without Reset: N/A 00:31:05.398 Multiple Update Detection Support: N/A 00:31:05.398 Firmware Update Granularity: No Information Provided 00:31:05.398 Per-Namespace SMART Log: Yes 00:31:05.398 Asymmetric Namespace Access Log Page: Supported 00:31:05.398 ANA Transition Time : 10 sec 00:31:05.398 00:31:05.398 Asymmetric Namespace Access Capabilities 00:31:05.398 ANA Optimized State : Supported 00:31:05.398 ANA Non-Optimized State : Supported 00:31:05.398 ANA Inaccessible State : Supported 00:31:05.398 ANA Persistent Loss State : Supported 00:31:05.398 ANA Change State : Supported 00:31:05.398 ANAGRPID is not changed : No 00:31:05.398 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:05.398 00:31:05.398 ANA Group Identifier Maximum : 128 00:31:05.398 Number of ANA Group Identifiers : 128 00:31:05.398 Max Number of Allowed Namespaces : 1024 00:31:05.398 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:05.398 Command Effects Log Page: Supported 00:31:05.398 Get Log Page Extended Data: Supported 00:31:05.398 Telemetry Log Pages: Not Supported 00:31:05.398 Persistent Event Log Pages: Not Supported 00:31:05.398 Supported Log Pages Log Page: May Support 00:31:05.398 Commands Supported & Effects Log Page: Not Supported 00:31:05.398 Feature Identifiers & Effects Log Page:May Support 00:31:05.398 NVMe-MI Commands & Effects Log Page: May Support 00:31:05.398 Data Area 4 for Telemetry Log: Not Supported 00:31:05.398 Error Log Page Entries Supported: 128 00:31:05.398 Keep Alive: Supported 00:31:05.398 Keep Alive Granularity: 1000 ms 00:31:05.398 00:31:05.398 NVM Command Set Attributes 00:31:05.398 ========================== 00:31:05.398 Submission Queue Entry Size 00:31:05.398 Max: 64 00:31:05.398 Min: 64 00:31:05.398 Completion Queue Entry Size 00:31:05.398 Max: 16 00:31:05.398 Min: 16 00:31:05.398 Number of Namespaces: 1024 00:31:05.398 Compare Command: Not Supported 00:31:05.398 Write Uncorrectable Command: Not Supported 00:31:05.398 Dataset Management Command: Supported 
00:31:05.398 Write Zeroes Command: Supported 00:31:05.398 Set Features Save Field: Not Supported 00:31:05.398 Reservations: Not Supported 00:31:05.398 Timestamp: Not Supported 00:31:05.398 Copy: Not Supported 00:31:05.398 Volatile Write Cache: Present 00:31:05.398 Atomic Write Unit (Normal): 1 00:31:05.398 Atomic Write Unit (PFail): 1 00:31:05.398 Atomic Compare & Write Unit: 1 00:31:05.398 Fused Compare & Write: Not Supported 00:31:05.398 Scatter-Gather List 00:31:05.398 SGL Command Set: Supported 00:31:05.398 SGL Keyed: Not Supported 00:31:05.398 SGL Bit Bucket Descriptor: Not Supported 00:31:05.398 SGL Metadata Pointer: Not Supported 00:31:05.398 Oversized SGL: Not Supported 00:31:05.398 SGL Metadata Address: Not Supported 00:31:05.398 SGL Offset: Supported 00:31:05.398 Transport SGL Data Block: Not Supported 00:31:05.398 Replay Protected Memory Block: Not Supported 00:31:05.398 00:31:05.398 Firmware Slot Information 00:31:05.398 ========================= 00:31:05.398 Active slot: 0 00:31:05.398 00:31:05.398 Asymmetric Namespace Access 00:31:05.398 =========================== 00:31:05.398 Change Count : 0 00:31:05.398 Number of ANA Group Descriptors : 1 00:31:05.398 ANA Group Descriptor : 0 00:31:05.398 ANA Group ID : 1 00:31:05.398 Number of NSID Values : 1 00:31:05.398 Change Count : 0 00:31:05.398 ANA State : 1 00:31:05.398 Namespace Identifier : 1 00:31:05.398 00:31:05.398 Commands Supported and Effects 00:31:05.398 ============================== 00:31:05.398 Admin Commands 00:31:05.398 -------------- 00:31:05.398 Get Log Page (02h): Supported 00:31:05.398 Identify (06h): Supported 00:31:05.398 Abort (08h): Supported 00:31:05.398 Set Features (09h): Supported 00:31:05.398 Get Features (0Ah): Supported 00:31:05.398 Asynchronous Event Request (0Ch): Supported 00:31:05.398 Keep Alive (18h): Supported 00:31:05.398 I/O Commands 00:31:05.398 ------------ 00:31:05.398 Flush (00h): Supported 00:31:05.398 Write (01h): Supported LBA-Change 00:31:05.398 Read (02h): Supported 00:31:05.398 Write Zeroes (08h): Supported LBA-Change 00:31:05.398 Dataset Management (09h): Supported 00:31:05.398 00:31:05.398 Error Log 00:31:05.398 ========= 00:31:05.398 Entry: 0 00:31:05.398 Error Count: 0x3 00:31:05.398 Submission Queue Id: 0x0 00:31:05.398 Command Id: 0x5 00:31:05.398 Phase Bit: 0 00:31:05.398 Status Code: 0x2 00:31:05.398 Status Code Type: 0x0 00:31:05.398 Do Not Retry: 1 00:31:05.657 Error Location: 0x28 00:31:05.657 LBA: 0x0 00:31:05.657 Namespace: 0x0 00:31:05.657 Vendor Log Page: 0x0 00:31:05.657 ----------- 00:31:05.657 Entry: 1 00:31:05.657 Error Count: 0x2 00:31:05.657 Submission Queue Id: 0x0 00:31:05.657 Command Id: 0x5 00:31:05.657 Phase Bit: 0 00:31:05.657 Status Code: 0x2 00:31:05.657 Status Code Type: 0x0 00:31:05.657 Do Not Retry: 1 00:31:05.657 Error Location: 0x28 00:31:05.657 LBA: 0x0 00:31:05.657 Namespace: 0x0 00:31:05.657 Vendor Log Page: 0x0 00:31:05.657 ----------- 00:31:05.657 Entry: 2 00:31:05.657 Error Count: 0x1 00:31:05.657 Submission Queue Id: 0x0 00:31:05.657 Command Id: 0x4 00:31:05.657 Phase Bit: 0 00:31:05.657 Status Code: 0x2 00:31:05.657 Status Code Type: 0x0 00:31:05.657 Do Not Retry: 1 00:31:05.657 Error Location: 0x28 00:31:05.657 LBA: 0x0 00:31:05.657 Namespace: 0x0 00:31:05.657 Vendor Log Page: 0x0 00:31:05.657 00:31:05.657 Number of Queues 00:31:05.657 ================ 00:31:05.658 Number of I/O Submission Queues: 128 00:31:05.658 Number of I/O Completion Queues: 128 00:31:05.658 00:31:05.658 ZNS Specific Controller Data 00:31:05.658 
============================ 00:31:05.658 Zone Append Size Limit: 0 00:31:05.658 00:31:05.658 00:31:05.658 Active Namespaces 00:31:05.658 ================= 00:31:05.658 get_feature(0x05) failed 00:31:05.658 Namespace ID:1 00:31:05.658 Command Set Identifier: NVM (00h) 00:31:05.658 Deallocate: Supported 00:31:05.658 Deallocated/Unwritten Error: Not Supported 00:31:05.658 Deallocated Read Value: Unknown 00:31:05.658 Deallocate in Write Zeroes: Not Supported 00:31:05.658 Deallocated Guard Field: 0xFFFF 00:31:05.658 Flush: Supported 00:31:05.658 Reservation: Not Supported 00:31:05.658 Namespace Sharing Capabilities: Multiple Controllers 00:31:05.658 Size (in LBAs): 1953525168 (931GiB) 00:31:05.658 Capacity (in LBAs): 1953525168 (931GiB) 00:31:05.658 Utilization (in LBAs): 1953525168 (931GiB) 00:31:05.658 UUID: 0246a4fb-9dcd-4f52-b305-2fb4727ce6dd 00:31:05.658 Thin Provisioning: Not Supported 00:31:05.658 Per-NS Atomic Units: Yes 00:31:05.658 Atomic Boundary Size (Normal): 0 00:31:05.658 Atomic Boundary Size (PFail): 0 00:31:05.658 Atomic Boundary Offset: 0 00:31:05.658 NGUID/EUI64 Never Reused: No 00:31:05.658 ANA group ID: 1 00:31:05.658 Namespace Write Protected: No 00:31:05.658 Number of LBA Formats: 1 00:31:05.658 Current LBA Format: LBA Format #00 00:31:05.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:05.658 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:05.658 rmmod nvme_tcp 00:31:05.658 rmmod nvme_fabrics 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.658 12:26:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:07.564 
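
Cleanup now runs in two halves, as the next trace lines show: nvmftestfini unloads the initiator transport modules and drops the test namespace, then clean_kernel_target dismantles the configfs tree in reverse creation order. Condensed, with the caveats that the echo 0 target is again hidden by xtrace (assumed to be the namespace enable attribute) and that the namespace delete is what _remove_spdk_ns amounts to (its body is elided in the trace):

  # initiator side
  modprobe -v -r nvme-tcp nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
  # kernel target side
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  echo 0 > "$sub/namespaces/1/enable"    # quiesce the namespace first
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet
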
12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:07.564 12:26:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:08.938 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:08.938 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:08.938 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:08.938 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:08.938 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:08.938 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:08.938 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:08.938 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:08.938 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:08.938 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:08.938 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:08.939 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:08.939 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:08.939 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:08.939 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:08.939 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:09.874 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:09.874 00:31:09.874 real 0m9.364s 00:31:09.874 user 0m1.930s 00:31:09.874 sys 0m3.411s 00:31:09.874 12:26:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.874 12:26:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.874 ************************************ 00:31:09.874 END TEST nvmf_identify_kernel_target 00:31:09.874 ************************************ 00:31:09.874 12:26:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:09.874 12:26:17 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:09.874 12:26:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:09.874 12:26:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.874 12:26:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.874 ************************************ 00:31:09.874 START TEST nvmf_auth_host 00:31:09.874 ************************************ 00:31:09.874 12:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:10.132 * Looking for test storage... 00:31:10.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.132 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.132 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:10.132 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.132 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.132 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.132 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:10.133 12:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.034 
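
The digests and dhgroups arrays declared by auth.sh a few lines up define the DH-HMAC-CHAP test matrix. A sketch of the iteration shape only; run_case is a hypothetical placeholder, not a function from auth.sh:

  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      run_case "$digest" "$dhgroup"      # hypothetical: one authenticated connect per pair
    done
  done
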
12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:12.034 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:12.034 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:12.034 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.034 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:12.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:12.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:31:12.035 00:31:12.035 --- 10.0.0.2 ping statistics --- 00:31:12.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.035 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:31:12.035 00:31:12.035 --- 10.0.0.1 ping statistics --- 00:31:12.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.035 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1126514 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1126514 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1126514 ']' 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
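
Unlike the previous test, the target application here runs inside the namespace: nvmf/common.sh@270 prefixed NVMF_APP with NVMF_TARGET_NS_CMD, so the nvmfappstart above expands to roughly the following (the workspace path is abbreviated, and the polling loop stands in for waitforlisten):

  ip netns exec cvl_0_0_ns_spdk \
    .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # simplified waitforlisten: block until the RPC socket appears
  until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done
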
00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:12.035 12:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c02f66da023117d857735ecaccd74cd5 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.B7Y 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c02f66da023117d857735ecaccd74cd5 0 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c02f66da023117d857735ecaccd74cd5 0 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c02f66da023117d857735ecaccd74cd5 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:12.293 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.B7Y 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.B7Y 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.B7Y 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:12.552 
12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4db5b83b300dbb4040c63b5953485932d112afe001ac65daccbeae7a95f26ef0 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VtG 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4db5b83b300dbb4040c63b5953485932d112afe001ac65daccbeae7a95f26ef0 3 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4db5b83b300dbb4040c63b5953485932d112afe001ac65daccbeae7a95f26ef0 3 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4db5b83b300dbb4040c63b5953485932d112afe001ac65daccbeae7a95f26ef0 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VtG 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VtG 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.VtG 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=91916f52d8dbd13a7aa3cb5e906daf7123da25ff2a644603 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.WPs 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 91916f52d8dbd13a7aa3cb5e906daf7123da25ff2a644603 0 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 91916f52d8dbd13a7aa3cb5e906daf7123da25ff2a644603 0 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=91916f52d8dbd13a7aa3cb5e906daf7123da25ff2a644603 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.WPs 00:31:12.552 12:26:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.WPs 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.WPs 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ea9f34dc862d4eea66bb7aca080c15d0caa2d0e4122483f2 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zSB 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ea9f34dc862d4eea66bb7aca080c15d0caa2d0e4122483f2 2 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ea9f34dc862d4eea66bb7aca080c15d0caa2d0e4122483f2 2 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ea9f34dc862d4eea66bb7aca080c15d0caa2d0e4122483f2 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zSB 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zSB 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zSB 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:12.552 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7ac23c5188dad2e3c65716ed2cba0bb9 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eTk 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7ac23c5188dad2e3c65716ed2cba0bb9 1 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7ac23c5188dad2e3c65716ed2cba0bb9 1 
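Each gen_dhchap_key <digest> <len> call in this stretch draws len/2 random bytes with xxd -p -c0 -l $((len / 2)) /dev/urandom and wraps the resulting hex string into the DHHC-1:<dd>:<base64>: secret format, where <dd> is the digest index from the trace's associative array (null=0, sha256=1, sha384=2, sha512=3). The python one-liner that does the wrapping is not shown by xtrace; a minimal sketch, assuming (as nvme-cli's key generator does) that the base64 payload is the ASCII hex key with its little-endian CRC-32 appended, and taking the digest index directly as a number:

# Sketch of gen_dhchap_key: hex key of $len chars, wrapped as DHHC-1:<dd>:<b64>:
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# Assumption: payload = ASCII hex key + little-endian CRC-32, base64-encoded.
payload = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
print(f"DHHC-1:{digest:02x}:{payload}:")
PY
}

gen_dhchap_key_sketch 1 32   # e.g. a sha256-tagged secret from 32 hex chars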
00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7ac23c5188dad2e3c65716ed2cba0bb9 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eTk 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eTk 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.eTk 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=106522f49f125f1c7d7f08763e97bc5f 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cRE 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 106522f49f125f1c7d7f08763e97bc5f 1 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 106522f49f125f1c7d7f08763e97bc5f 1 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=106522f49f125f1c7d7f08763e97bc5f 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:12.553 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.811 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cRE 00:31:12.811 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cRE 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.cRE 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=494a16e9e24358cf8e4c4ba4465ac1b37dcdbe87eee843a0 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uqV 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 494a16e9e24358cf8e4c4ba4465ac1b37dcdbe87eee843a0 2 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 494a16e9e24358cf8e4c4ba4465ac1b37dcdbe87eee843a0 2 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=494a16e9e24358cf8e4c4ba4465ac1b37dcdbe87eee843a0 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uqV 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uqV 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.uqV 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e7137c51efe566df693d701f3c67cf44 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NAy 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e7137c51efe566df693d701f3c67cf44 0 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e7137c51efe566df693d701f3c67cf44 0 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e7137c51efe566df693d701f3c67cf44 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NAy 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NAy 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.NAy 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d2608789af73c5a5fe7a3c9aa23a2763ab01a9c8346c61998382cd2164dc8e39 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Q6C 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d2608789af73c5a5fe7a3c9aa23a2763ab01a9c8346c61998382cd2164dc8e39 3 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d2608789af73c5a5fe7a3c9aa23a2763ab01a9c8346c61998382cd2164dc8e39 3 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d2608789af73c5a5fe7a3c9aa23a2763ab01a9c8346c61998382cd2164dc8e39 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Q6C 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Q6C 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Q6C 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1126514 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1126514 ']' 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
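The structure is easy to check against the secrets just generated: stripping the DHHC-1:<dd>: prefix and base64-decoding the payload of keys[0] yields the original hex string plus four trailing checksum bytes. A quick verification using the null-digest key from the trace above:

python3 - <<'PY'
import base64
# keys[0] as generated above
secret = "DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC:"
payload = base64.b64decode(secret.split(":")[2])
print(payload[:-4].decode())  # -> c02f66da023117d857735ecaccd74cd5
print(payload[-4:].hex())     # the 4 trailing checksum bytes
PY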
00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:12.812 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.B7Y 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.VtG ]] 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VtG 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.WPs 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zSB ]] 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zSB 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.071 12:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.eTk 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.cRE ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cRE 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
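The keyring_file_add_key loop running through here (key0/ckey0 up to key2/ckey2 so far, key3 and key4 below) hands each secret file to SPDK's keyring over the RPC socket, so the authentication calls later can refer to them by name. Reduced to its essentials — rpc_cmd is the test framework's wrapper around rpc.py, whose path here is an assumption based on the workspace layout seen elsewhere in this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is deliberately empty: keyid 4 exercises unidirectional auth.
    [[ -n ${ckeys[i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done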
00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uqV 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.NAy ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.NAy 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Q6C 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
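configure_kernel_target, traced next, stands up the in-kernel NVMe/TCP target purely through configfs: one subsystem (nqn.2024-02.io.spdk:cnode0) backed by the local NVMe namespace, one TCP port on 10.0.0.1:4420, and a symlink exposing the subsystem on the port. xtrace does not print redirections, so only bare echo commands appear in the log; this sketch fills in the redirection targets with the standard nvmet attribute paths, an assumption consistent with the values being written:

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1             > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$port/addr_traddr"
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

host/auth.sh then tightens this for authentication: it creates /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, sets the subsystem's allow-any-host attribute back to 0 (presumably, given the bare echo 0 in the trace), and links the host into the subsystem's allowed_hosts/ directory — the mkdir / echo 0 / ln -s trio visible right after the discovery log.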
00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:13.331 12:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:14.267 Waiting for block devices as requested 00:31:14.525 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:14.525 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:14.784 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:14.784 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:14.784 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:15.070 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:15.070 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:15.070 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:15.070 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:15.338 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:15.338 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:15.338 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:15.338 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:15.338 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:15.595 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:15.595 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:15.595 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:16.175 No valid GPT data, bailing 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:16.175 00:31:16.175 Discovery Log Number of Records 2, Generation counter 2 00:31:16.175 =====Discovery Log Entry 0====== 00:31:16.175 trtype: tcp 00:31:16.175 adrfam: ipv4 00:31:16.175 subtype: current discovery subsystem 00:31:16.175 treq: not specified, sq flow control disable supported 00:31:16.175 portid: 1 00:31:16.175 trsvcid: 4420 00:31:16.175 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:16.175 traddr: 10.0.0.1 00:31:16.175 eflags: none 00:31:16.175 sectype: none 00:31:16.175 =====Discovery Log Entry 1====== 00:31:16.175 trtype: tcp 00:31:16.175 adrfam: ipv4 00:31:16.175 subtype: nvme subsystem 00:31:16.175 treq: not specified, sq flow control disable supported 00:31:16.175 portid: 1 00:31:16.175 trsvcid: 4420 00:31:16.175 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:16.175 traddr: 10.0.0.1 00:31:16.175 eflags: none 00:31:16.175 sectype: none 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 
]] 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.175 12:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.175 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.176 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.435 nvme0n1 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.435 
12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.435 
12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.435 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.693 nvme0n1 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.693 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.694 12:26:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.694 nvme0n1 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.694 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
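Each iteration of the auth matrix follows the same two-sided pattern, shown here for the sha256/ffdhe2048/keyid=1 pass just traced. The kernel half (nvmet_auth_set_key) writes the negotiation parameters and the DHHC-1 secrets into the host's nvmet configfs entry — attribute names assumed from the standard nvmet layout, since xtrace again hides the redirections — and the SPDK half (connect_authenticate) pins the host stack to the same digest and DH group, then attaches with the keyring names registered earlier:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'  > "$host/dhchap_hash"      # digest for this pass
echo ffdhe2048       > "$host/dhchap_dhgroup"   # DH group for this pass
echo "${keys[1]}"    > "$host/dhchap_key"       # DHHC-1:00:OTE5... from above
echo "${ckeys[1]}"   > "$host/dhchap_ctrl_key"  # enables bidirectional auth

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" bdev_nvme_detach_controller nvme0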
00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:16.951 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.952 nvme0n1 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.952 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:17.211 12:26:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.211 12:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.211 nvme0n1 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.211 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.470 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.471 nvme0n1 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.471 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 nvme0n1 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.990 nvme0n1 00:31:17.990 
12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.990 12:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.302 nvme0n1 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
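The four echoes traced at host/auth.sh@48-51 are how nvmet_auth_set_key programs the in-kernel target side of the handshake. xtrace does not print redirection targets, so the destination paths in the sketch below are an assumption based on the standard Linux nvmet configfs host attributes, not something shown in this log. (The DHHC-1:NN: prefix on each secret is the NVMe DH-HMAC-CHAP secret representation; NN encodes the transform applied to the secret: 00 = cleartext, 01/02/03 = SHA-256/384/512.)

  # Hypothetical reconstruction of nvmet_auth_set_key sha256 ffdhe3072 3.
  # The configfs path is assumed; the xtrace above shows only the echo commands.
  h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$h/dhchap_hash"     # digest      (host/auth.sh@48)
  echo ffdhe3072      > "$h/dhchap_dhgroup"  # DH group    (host/auth.sh@49)
  echo "$key"         > "$h/dhchap_key"      # host secret (host/auth.sh@50)
  [[ -n "$ckey" ]] && echo "$ckey" > "$h/dhchap_ctrl_key"  # bidirectional auth (host/auth.sh@51)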
00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:18.302 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.303 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.562 nvme0n1 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.562 
12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.562 12:26:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.562 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.820 nvme0n1 00:31:18.820 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.820 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.820 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:18.821 12:26:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.821 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.078 nvme0n1 00:31:19.078 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.078 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.078 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.078 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.078 12:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.078 12:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:19.335 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.336 12:26:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.336 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.594 nvme0n1 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.594 12:26:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.594 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.852 nvme0n1 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
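Each connect_authenticate pass traced at host/auth.sh@55-65 reduces to the short RPC sequence below. This is a condensed sketch of one iteration, with rpc_cmd taken to be the test suite's wrapper around SPDK's scripts/rpc.py, and the address, port, and NQNs copied from the commands visible in the trace.

  # One authenticate-connect-verify-detach cycle (sha256 / ffdhe4096 / key 3),
  # condensed from the surrounding trace records.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]  # host/auth.sh@64
  rpc_cmd bdev_nvme_detach_controller nvme0                                 # host/auth.sh@65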
00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.852 12:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.419 nvme0n1 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.419 12:26:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.419 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.420 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.680 nvme0n1 00:31:20.680 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.680 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:20.681 12:26:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.681 12:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.249 nvme0n1 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.249 
12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:21.249 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.250 12:26:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.250 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.816 nvme0n1 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.816 12:26:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.384 nvme0n1 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.384 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.650 
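The repeated nvmf/common.sh@741-755 run above is the get_main_ns_ip helper deciding which address to dial. Reconstructed from the trace it is roughly the following; TEST_TRANSPORT is an assumed name for the transport selector, since the trace only shows its expanded value, tcp:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z "$TEST_TRANSPORT" ]] && return 1                  # common.sh@747
      local var=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z "$var" ]] && return 1                             # common.sh@747
      ip=${!var}     # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z "$ip" ]] && return 1                              # common.sh@750
      echo "$ip"                                              # common.sh@755
  }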
12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.650 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.237 nvme0n1 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.237 12:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.805 nvme0n1 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.805 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.806 12:26:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.745 nvme0n1 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.745 12:26:32 
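A note on the secrets themselves: in the DH-HMAC-CHAP secret representation, the two digits after "DHHC-1:" identify the secret class (00 is a free-form secret; 01, 02 and 03 denote 32-, 48- and 64-byte secrets, the output sizes of SHA-256/384/512), and the base64 payload is the secret followed by a 4-byte CRC-32. One way to peek inside key0 from this trace, assuming GNU coreutils for the negative head count:

  key='DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC:'
  payload=${key#DHHC-1:00:}
  payload=${payload%:}
  echo "$payload" | base64 -d | head -c -4; echo   # secret bytes, CRC stripped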
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.745 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.746 12:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.687 nvme0n1 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.687 12:26:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.620 nvme0n1 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.620 
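The recurring \n\v\m\e\0 pattern in these checks is not garbled output: the right-hand side of == inside [[ ]] is a glob pattern, and backslash-escaping every character forces a literal comparison with the controller name returned by jq. In isolation:

  name=nvme0
  [[ "$name" == \n\v\m\e\0 ]] && echo "literal match"   # prints: literal match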
12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:26.620 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.621 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
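On the target side, the nvmet_auth_set_key calls traced at auth.sh@48-51 install the matching digest, DH group and secrets for the host NQN. The helper's body is not part of this excerpt; a plausible sketch, assuming the standard kernel nvmet configfs attributes (the paths below are an assumption, not something shown in the trace):

  # Secrets for keyid 3, copied from the trace above.
  key='DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==:'
  ckey='DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs:'

  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"       # auth.sh@48
  echo ffdhe8192 > "$host_dir/dhchap_dhgroup"         # auth.sh@49
  echo "$key" > "$host_dir/dhchap_key"                # auth.sh@50
  [[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # auth.sh@51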
00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.879 12:26:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.812 nvme0n1 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.812 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:27.813 
12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.813 12:26:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.744 nvme0n1 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
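Here the trace finishes sha256 and re-enters the outer loops with sha384/ffdhe2048. The driver implied by auth.sh@100-104 is a triple nest over digests, DH groups and key ids, roughly as follows; the array contents are assumptions, since this excerpt only reaches sha256/sha384 and the ffdhe groups shown, and keys/ckeys are populated earlier in auth.sh:

  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  # keys[keyid]/ckeys[keyid] hold DHHC-1 secrets, set up before this loop.
  for digest in "${digests[@]}"; do          # auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do    # auth.sh@101
          for keyid in "${!keys[@]}"; do     # auth.sh@102
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # auth.sh@103
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@104
          done
      done
  done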
nvmet_auth_set_key sha384 ffdhe2048 0 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.744 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.002 nvme0n1 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
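The auth.sh@58 assignment that just traced is the mechanism that makes the controller key optional: ${ckeys[keyid]:+...} expands to nothing when the controller secret for this keyid is empty (as it is for keyid 4, where @46 shows a bare ckey=), so --dhchap-ctrlr-key vanishes from the attach call and authentication is unidirectional. In isolation:

  ckeys=([0]='DHHC-1:03:NGRi...=:' [4]='')   # value for keyid 0 elided here
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0: no extra arguments, unidirectional auth

  keyid=0
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 2: --dhchap-ctrlr-key ckey0, bidirectional auth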
00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.002 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.260 nvme0n1 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.260 12:26:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.260 nvme0n1 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.260 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 nvme0n1 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.518 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.829 nvme0n1 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.829 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
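The host/auth.sh@42-@51 trace repeated above is the nvmet_auth_set_key helper keying the kernel nvmet target for the next handshake: for each (digest, dhgroup, keyid) combination it echoes 'hmac(<digest>)', the FFDHE group name, the DHHC-1 host secret, and, when ckeys[keyid] is non-empty, a controller key for bidirectional authentication (keyid=4 has none, so the @51 echo is skipped there). The redirection targets never appear in xtrace output, so the sketch below is a hedged reconstruction that assumes the echoes land in the usual nvmet configfs attributes; the hostnqn_dir path is an assumption, not copied from the script.

    # hypothetical reconstruction of nvmet_auth_set_key; configfs paths are assumed
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        local hostnqn_dir="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"  # assumed
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"
        echo "hmac(${digest})" > "${hostnqn_dir}/dhchap_hash"    # e.g. hmac(sha384)
        echo "${dhgroup}" > "${hostnqn_dir}/dhchap_dhgroup"      # e.g. ffdhe3072
        echo "${key}" > "${hostnqn_dir}/dhchap_key"              # host secret in DHHC-1 format
        # controller key is written only when one exists for this keyid
        [[ -z ${ckey} ]] || echo "${ckey}" > "${hostnqn_dir}/dhchap_ctrl_key"
    }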
00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.830 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.114 nvme0n1 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
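With the target side keyed, connect_authenticate (host/auth.sh@55-@65) drives the initiator through SPDK RPC, exactly as traced above and below: bdev_nvme_set_options pins the permitted digest and DH group, get_main_ns_ip resolves the tcp candidate NVMF_INITIATOR_IP to 10.0.0.1, bdev_nvme_attach_controller performs the authenticated connect (passing --dhchap-ctrlr-key only when a ckey exists for the keyid), and the controller name is verified before it is detached for the next iteration. A condensed by-hand replay of the iteration just started (sha384 / ffdhe3072 / keyid=1) follows; the ./scripts/rpc.py path is an assumption, and key1/ckey1 must already be registered with the SPDK application as done earlier in the test run (not shown in this excerpt).

    rpc=./scripts/rpc.py   # assumed location of SPDK's RPC client
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth succeeded
    $rpc bdev_nvme_detach_controller nvme0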
00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.114 12:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.370 nvme0n1 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.370 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.627 nvme0n1 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.627 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.884 nvme0n1 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.884 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.141 nvme0n1 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.141 12:26:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.141 12:26:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.141 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.398 nvme0n1 00:31:31.398 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.398 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.398 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.398 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.398 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.398 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.655 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.913 nvme0n1 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.913 12:26:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.913 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.170 nvme0n1 00:31:32.170 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.170 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.170 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.170 12:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.170 12:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:32.170 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:32.171 12:26:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.171 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.428 nvme0n1 00:31:32.428 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.428 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.428 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.428 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.428 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.428 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:32.685 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.943 nvme0n1 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.943 12:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.508 nvme0n1 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.508 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.072 nvme0n1 00:31:34.072 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.072 12:26:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.072 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.072 12:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.072 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.072 12:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.329 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.894 nvme0n1 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.894 12:26:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.459 nvme0n1 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
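The nvmf/common.sh@741-755 entries above are get_main_ns_ip resolving which address the initiator should dial: the ip_candidates associative array maps each transport to the name of the variable that holds its address (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), and the chosen name is then dereferenced, which is why the trace first tests the literal NVMF_INITIATOR_IP and only afterwards the resolved 10.0.0.1. A minimal sketch of that lookup, assuming the harness exports TEST_TRANSPORT and the NVMF_* variables (only the literals tcp and 10.0.0.1 are visible in the trace):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        # Bail out if the transport is unset or has no candidate variable mapped.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ${!ip} is indirect expansion: it reads the variable *named* by $ip.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

Storing variable names rather than values keeps the map valid even if the harness assigns the NVMF_* addresses after the function is defined.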
00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.459 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.024 nvme0n1 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
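Each bare nvme0n1 marker above (the namespace's block device, printed while xtrace is suspended) closes one pass of the host/auth.sh@100-103 loops: for every digest, dhgroup and keyid combination, nvmet_auth_set_key (auth.sh@42-51) reprograms the target's DH-HMAC-CHAP key, and connect_authenticate (auth.sh@55-65) then restricts the host to that one digest and DH group, attaches a controller with the matching key, verifies it actually appeared, and detaches it for the next pass. The auth.sh@58 expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) explains the keyid=4 attach commands above: key 4 is defined without a controller key, so the array expands to nothing and, unlike keyids 0-3, no --dhchap-ctrlr-key is passed, exercising one-way instead of bidirectional authentication. The host side of a single pass, read off the trace (a simplified sketch; the rpc_cmd wrapper and the keys/ckeys arrays are set up elsewhere in auth.sh):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to zero words when no controller key exists for this keyid.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Allow exactly one digest/dhgroup pair for the handshake under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The controller only shows up if the DH-CHAP handshake succeeded.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Attaching and detaching inside the loop keeps every combination independent: a handshake failure surfaces immediately as a missing nvme0 controller rather than leaking into later iterations, which is what the repeated [[ nvme0 == \n\v\m\e\0 ]] checks in the trace are asserting.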
00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.024 12:26:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.956 nvme0n1 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.956 12:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.328 nvme0n1 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.328 12:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.892 nvme0n1 00:31:38.892 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.892 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.892 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.892 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.892 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.892 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.150 12:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.083 nvme0n1 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.083 12:26:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.083 12:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.024 nvme0n1 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.024 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.025 12:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.284 nvme0n1 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.284 12:26:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.284 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.542 nvme0n1 00:31:41.542 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.542 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.542 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.542 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.542 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.542 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.543 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.801 nvme0n1 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.801 12:26:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.801 12:26:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.801 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.060 nvme0n1 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.060 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.319 nvme0n1 00:31:42.319 12:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.319 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.591 nvme0n1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.591 
12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.591 12:26:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.591 nvme0n1 00:31:42.591 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:42.849 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.850 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.107 nvme0n1 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.107 12:26:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.107 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.108 12:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.108 12:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:43.108 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.108 12:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.108 nvme0n1 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.365 
12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.365 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.622 nvme0n1 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.622 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.879 nvme0n1 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.879 12:26:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.879 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.880 12:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.137 nvme0n1 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.137 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.394 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.395 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.395 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.395 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.681 nvme0n1 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.681 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.938 nvme0n1 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.938 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.939 12:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.203 nvme0n1 00:31:45.203 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.203 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.203 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.203 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.203 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.203 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
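(The get_main_ns_ip block that keeps repeating in this trace is the suite's transport-to-address lookup: an associative array maps each transport to the name of the environment variable holding its address, and the function indirectly expands whichever entry matches the active transport. Below is a minimal standalone sketch of that idiom, not the verbatim nvmf/common.sh body; TEST_TRANSPORT stands in for the parameter the trace shows already expanded to tcp.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1             # no transport selected
    local var=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $var ]] && return 1                        # transport has no mapping
    ip=${!var}                                       # indirect expansion of the env var
    [[ -z $ip ]] && return 1                         # variable was unset or empty
    echo "$ip"                                       # here: 10.0.0.1
}

export TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip    # prints 10.0.0.1

End of sketch; trace continues.)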
00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.464 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.026 nvme0n1 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
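(Each nvme0n1 hot-add in this trace marks one completed pass of the same cycle that connect_authenticate drives: pin the host to a single digest and DH group, attach with the keyid under test, confirm the controller appears, detach. Condensed from the trace into one pass; rpc_cmd is the suite's JSON-RPC wrapper and ckeys is its array of controller keys, empty for keyid 4.

digest=sha512 dhgroup=ffdhe6144 keyid=1

# Host side: allow only this digest/DH-group pair for the DH-HMAC-CHAP negotiation.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Append --dhchap-ctrlr-key only when a controller key exists for this keyid;
# ${...:+...} makes the option pair vanish when ckeys[keyid] is unset (keyid 4).
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

# The attach only succeeds if authentication did; verify, then tear down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

End of sketch; trace continues.)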
00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.026 12:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.591 nvme0n1 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.591 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.157 nvme0n1 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.157 12:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.157 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.723 nvme0n1 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.723 12:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.288 nvme0n1 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.288 12:26:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.288 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAyZjY2ZGEwMjMxMTdkODU3NzM1ZWNhY2NkNzRjZDWjRiqC: 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: ]] 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiNWI4M2IzMDBkYmI0MDQwYzYzYjU5NTM0ODU5MzJkMTEyYWZlMDAxYWM2NWRhY2NiZWFlN2E5NWYyNmVmMK1qT34=: 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.546 12:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.479 nvme0n1 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.480 12:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.413 nvme0n1 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.413 12:26:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FjMjNjNTE4OGRhZDJlM2M2NTcxNmVkMmNiYTBiYjnUfsyK: 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA2NTIyZjQ5ZjEyNWYxYzdkN2YwODc2M2U5N2JjNWZnbplL: 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.413 12:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.346 nvme0n1 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDk0YTE2ZTllMjQzNThjZjhlNGM0YmE0NDY1YWMxYjM3ZGNkYmU4N2VlZTg0M2EwNhtOXQ==: 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcxMzdjNTFlZmU1NjZkZjY5M2Q3MDFmM2M2N2NmNDQYHEZs: 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:51.347 12:26:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.347 12:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.279 nvme0n1 00:31:52.279 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.279 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDI2MDg3ODlhZjczYzVhNWZlN2EzYzlhYTIzYTI3NjNhYjAxYTljODM0NmM2MTk5ODM4MmNkMjE2NGRjOGUzObZpKaQ=: 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:52.280 12:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.210 nvme0n1 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==: 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: ]] 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==: 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.210 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.467 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.467 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.467 
12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.467 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.467 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.467 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.467 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.468 request: 00:31:53.468 { 00:31:53.468 "name": "nvme0", 00:31:53.468 "trtype": "tcp", 00:31:53.468 "traddr": "10.0.0.1", 00:31:53.468 "adrfam": "ipv4", 00:31:53.468 "trsvcid": "4420", 00:31:53.468 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:53.468 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:53.468 "prchk_reftag": false, 00:31:53.468 "prchk_guard": false, 00:31:53.468 "hdgst": false, 00:31:53.468 "ddgst": false, 00:31:53.468 "method": "bdev_nvme_attach_controller", 00:31:53.468 "req_id": 1 00:31:53.468 } 00:31:53.468 Got JSON-RPC error response 00:31:53.468 response: 00:31:53.468 { 00:31:53.468 "code": -5, 00:31:53.468 "message": "Input/output error" 00:31:53.468 } 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.468 request: 00:31:53.468 { 00:31:53.468 "name": "nvme0", 00:31:53.468 "trtype": "tcp", 00:31:53.468 "traddr": "10.0.0.1", 00:31:53.468 "adrfam": "ipv4", 00:31:53.468 "trsvcid": "4420", 00:31:53.468 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:53.468 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:53.468 "prchk_reftag": false, 00:31:53.468 "prchk_guard": false, 00:31:53.468 "hdgst": false, 00:31:53.468 "ddgst": false, 00:31:53.468 "dhchap_key": "key2", 00:31:53.468 "method": "bdev_nvme_attach_controller", 00:31:53.468 "req_id": 1 00:31:53.468 } 00:31:53.468 Got JSON-RPC error response 00:31:53.468 response: 00:31:53.468 { 00:31:53.468 "code": -5, 00:31:53.468 "message": "Input/output error" 00:31:53.468 } 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:53.468 12:27:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.468 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.726 request: 00:31:53.726 { 00:31:53.726 "name": "nvme0", 00:31:53.726 "trtype": "tcp", 00:31:53.726 "traddr": "10.0.0.1", 00:31:53.726 "adrfam": "ipv4", 
00:31:53.726 "trsvcid": "4420", 00:31:53.726 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:53.726 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:53.726 "prchk_reftag": false, 00:31:53.726 "prchk_guard": false, 00:31:53.726 "hdgst": false, 00:31:53.726 "ddgst": false, 00:31:53.726 "dhchap_key": "key1", 00:31:53.726 "dhchap_ctrlr_key": "ckey2", 00:31:53.726 "method": "bdev_nvme_attach_controller", 00:31:53.726 "req_id": 1 00:31:53.726 } 00:31:53.726 Got JSON-RPC error response 00:31:53.726 response: 00:31:53.726 { 00:31:53.726 "code": -5, 00:31:53.726 "message": "Input/output error" 00:31:53.726 } 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:53.726 rmmod nvme_tcp 00:31:53.726 rmmod nvme_fabrics 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1126514 ']' 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1126514 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1126514 ']' 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1126514 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1126514 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1126514' 00:31:53.726 killing process with pid 1126514 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1126514 00:31:53.726 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1126514 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.984 12:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:55.885 12:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:57.258 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:57.258 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:57.258 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:57.258 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:57.258 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:57.258 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:57.258 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:57.258 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:57.258 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:57.258 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:57.258 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:57.258 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:57.258 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:57.258 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:57.258 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:57.517 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:58.085 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:58.344 12:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.B7Y /tmp/spdk.key-null.WPs /tmp/spdk.key-sha256.eTk /tmp/spdk.key-sha384.uqV /tmp/spdk.key-sha512.Q6C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:58.344 12:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:59.280 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:59.280 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:59.280 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:59.280 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:59.280 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:59.280 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:59.280 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:59.280 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:59.280 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:59.280 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:59.280 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:59.280 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:59.280 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:59.280 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:59.280 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:59.538 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:59.538 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:59.538 00:31:59.538 real 0m49.593s 00:31:59.538 user 0m47.697s 00:31:59.538 sys 0m5.603s 00:31:59.538 12:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:59.538 12:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.538 ************************************ 00:31:59.538 END TEST nvmf_auth_host 00:31:59.538 ************************************ 00:31:59.538 12:27:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:59.538 12:27:07 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:31:59.538 12:27:07 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:59.538 12:27:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:59.538 12:27:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.538 12:27:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.538 ************************************ 00:31:59.538 START TEST nvmf_digest 00:31:59.538 ************************************ 00:31:59.538 12:27:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:59.538 * Looking for test storage... 
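
For reference: the nvmet_auth_set_key sha256 ffdhe2048 1 call in the auth run above pushes the digest, DH group, and both DHHC-1 secrets into the kernel target's configfs entry for nqn.2024-02.io.spdk:host0. A minimal sketch of that step, assuming the helper writes the standard Linux nvmet auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrlr_key; the attribute names are an assumption, they are not shown verbatim in this log):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
# Digest and DH group, matching the host/auth.sh@48-49 echoes above.
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo 'ffdhe2048' > "$host/dhchap_dhgroup"
# Host secret (key) and controller secret (ckey), matching host/auth.sh@50-51.
echo 'DHHC-1:00:OTE5MTZmNTJkOGRiZDEzYTdhYTNjYjVlOTA2ZGFmNzEyM2RhMjVmZjJhNjQ0NjAz35yoSA==:' > "$host/dhchap_key"
echo 'DHHC-1:02:ZWE5ZjM0ZGM4NjJkNGVlYTY2YmI3YWNhMDgwYzE1ZDBjYWEyZDBlNDEyMjQ4M2Yy+NE1/Q==:' > "$host/dhchap_ctrlr_key"

With only these keyid-1 secrets installed on the target, the three NOT-wrapped bdev_nvme_attach_controller attempts above (no key, --dhchap-key key2, and key1 with --dhchap-ctrlr-key ckey2) are expected to fail, which is why each Input/output error response counts as a pass.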
00:31:59.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:59.797 12:27:07 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:59.798 12:27:07 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:31:59.798 12:27:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:01.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:01.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:01.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:01.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:01.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:32:01.778 00:32:01.778 --- 10.0.0.2 ping statistics --- 00:32:01.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.778 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:01.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:32:01.778 00:32:01.778 --- 10.0.0.1 ping statistics --- 00:32:01.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.778 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:01.778 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.779 ************************************ 00:32:01.779 START TEST nvmf_digest_clean 00:32:01.779 ************************************ 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1136589 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1136589 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1136589 ']' 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.779 
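
The nvmftestinit sequence above gives the digest tests a self-contained TCP path over the two ice ports by moving the target side into a private network namespace. Condensed from the nvmf/common.sh commands logged above (same sequence, stripped of xtrace prefixes):

# Target port gets its own namespace; the initiator port stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

Both pings completing with 0% packet loss is the gate for nvmf_tcp_init returning 0 above.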
12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:01.779 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:01.779 [2024-07-22 12:27:09.492370] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:01.779 [2024-07-22 12:27:09.492455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.779 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.779 [2024-07-22 12:27:09.531374] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:01.779 [2024-07-22 12:27:09.561676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.779 [2024-07-22 12:27:09.653102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.779 [2024-07-22 12:27:09.653162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.779 [2024-07-22 12:27:09.653179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.779 [2024-07-22 12:27:09.653192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.779 [2024-07-22 12:27:09.653204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:01.779 [2024-07-22 12:27:09.653232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.037 null0 00:32:02.037 [2024-07-22 12:27:09.850445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.037 [2024-07-22 12:27:09.874693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1136711 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1136711 /var/tmp/bperf.sock 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1136711 ']' 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:02.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:02.037 12:27:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.037 [2024-07-22 12:27:09.923894] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:02.037 [2024-07-22 12:27:09.923969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136711 ] 00:32:02.037 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.037 [2024-07-22 12:27:09.955402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:02.308 [2024-07-22 12:27:09.987340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.308 [2024-07-22 12:27:10.088557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.308 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:02.308 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:02.308 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:02.308 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:02.308 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:02.565 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:02.565 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:03.128 nvme0n1 00:32:03.128 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:03.128 12:27:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:03.385 Running I/O for 2 seconds... 
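
A quick consistency check on the table below: bdevperf reports MiB/s as IOPS times the I/O size, so the 19101.58 IOPS measured at 4096 bytes works out to the printed 74.62 MiB/s:

awk 'BEGIN { printf "%.2f MiB/s\n", 19101.58 * 4096 / (1024 * 1024) }'   # prints 74.62 MiB/s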
00:32:05.280 00:32:05.280 Latency(us) 00:32:05.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.280 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:05.280 nvme0n1 : 2.00 19101.58 74.62 0.00 0.00 6691.42 3301.07 13689.74 00:32:05.280 =================================================================================================================== 00:32:05.280 Total : 19101.58 74.62 0.00 0.00 6691.42 3301.07 13689.74 00:32:05.280 0 00:32:05.280 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:05.280 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:05.280 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:05.280 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:05.280 | select(.opcode=="crc32c") 00:32:05.280 | "\(.module_name) \(.executed)"' 00:32:05.280 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1136711 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1136711 ']' 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1136711 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1136711 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1136711' 00:32:05.537 killing process with pid 1136711 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1136711 00:32:05.537 Received shutdown signal, test time was about 2.000000 seconds 00:32:05.537 00:32:05.537 Latency(us) 00:32:05.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.537 =================================================================================================================== 00:32:05.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:05.537 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1136711 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:05.794 12:27:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1137113 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1137113 /var/tmp/bperf.sock 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1137113 ']' 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:05.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:05.794 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:05.794 [2024-07-22 12:27:13.648687] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:05.794 [2024-07-22 12:27:13.648777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137113 ] 00:32:05.794 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:05.794 Zero copy mechanism will not be used. 00:32:05.795 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.795 [2024-07-22 12:27:13.681366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
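
Each run_bperf pass follows the same RPC-driven shape: bdevperf starts frozen (--wait-for-rpc) on a private socket, gets initialized, is pointed at the target with the digest option under test, and is then told to run (the launch is shown above; the init, attach, and perform_tests steps follow below in the log). A sketch of that flow for this randread/131072/16 case, assuming the harness backgrounds the bdevperf process (the & is implied, not shown in the xtrace):

bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# host/digest.sh@82: start paused on core mask 0x2, 2s randread at qd16/128KiB.
$bperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

# host/digest.sh@87/@89: finish init, then attach with data digest enabled.
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# host/digest.sh@92: drive the timed workload.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests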
00:32:05.795 [2024-07-22 12:27:13.709115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.052 [2024-07-22 12:27:13.798552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.052 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:06.052 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:06.052 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:06.052 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:06.052 12:27:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:06.310 12:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:06.310 12:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:06.875 nvme0n1 00:32:06.875 12:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:06.875 12:27:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:06.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:06.875 Zero copy mechanism will not be used. 00:32:06.875 Running I/O for 2 seconds... 
00:32:08.772 00:32:08.772 Latency(us) 00:32:08.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.772 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:08.772 nvme0n1 : 2.00 3547.89 443.49 0.00 0.00 4504.91 1426.01 5971.06 00:32:08.772 =================================================================================================================== 00:32:08.772 Total : 3547.89 443.49 0.00 0.00 4504.91 1426.01 5971.06 00:32:08.772 0 00:32:08.772 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:08.772 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:08.772 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:08.772 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:08.772 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:08.772 | select(.opcode=="crc32c") 00:32:08.772 | "\(.module_name) \(.executed)"' 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1137113 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1137113 ']' 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1137113 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137113 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137113' 00:32:09.044 killing process with pid 1137113 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1137113 00:32:09.044 Received shutdown signal, test time was about 2.000000 seconds 00:32:09.044 00:32:09.044 Latency(us) 00:32:09.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.044 =================================================================================================================== 00:32:09.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:09.044 12:27:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1137113 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:09.302 12:27:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1137525 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1137525 /var/tmp/bperf.sock 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1137525 ']' 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:09.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:09.302 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:09.302 [2024-07-22 12:27:17.210176] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:09.302 [2024-07-22 12:27:17.210255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137525 ] 00:32:09.560 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.560 [2024-07-22 12:27:17.240856] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
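Each nvmf_digest_clean iteration is the same driver re-invoked with a different (rw, bs, qd) tuple: randread 131072/16 above, randwrite 4096/128 here, randwrite 131072/16 next. A hypothetical wrapper showing how those positional arguments map onto the bdevperf flags in the xtrace; the function name and shape are illustrative only.

    # Hedged sketch: run_bperf-style parameterization of bdevperf.
    run_bperf_sketch() {
        local rw=$1 bs=$2 qd=$3 scan_dsa=$4   # e.g. randwrite 4096 128 false
        # scan_dsa=false in all runs here, so no DSA probing takes place.
        "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -q "$qd" -t 2 -z --wait-for-rpc &
        bperfpid=$!
    }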
00:32:09.560 [2024-07-22 12:27:17.268287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.560 [2024-07-22 12:27:17.357789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.560 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:09.560 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:09.560 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:09.560 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:09.560 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:10.125 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.125 12:27:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.383 nvme0n1 00:32:10.383 12:27:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:10.383 12:27:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:10.383 Running I/O for 2 seconds... 00:32:12.911 00:32:12.911 Latency(us) 00:32:12.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.911 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.911 nvme0n1 : 2.01 20996.86 82.02 0.00 0.00 6085.41 2475.80 12718.84 00:32:12.911 =================================================================================================================== 00:32:12.911 Total : 20996.86 82.02 0.00 0.00 6085.41 2475.80 12718.84 00:32:12.911 0 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:12.911 | select(.opcode=="crc32c") 00:32:12.911 | "\(.module_name) \(.executed)"' 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1137525 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 1137525 ']' 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1137525 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137525 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137525' 00:32:12.911 killing process with pid 1137525 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1137525 00:32:12.911 Received shutdown signal, test time was about 2.000000 seconds 00:32:12.911 00:32:12.911 Latency(us) 00:32:12.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.911 =================================================================================================================== 00:32:12.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.911 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1137525 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1137969 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1137969 /var/tmp/bperf.sock 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1137969 ']' 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:13.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
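The teardown just repeated above (killprocess) is defensive: it confirms the pid is still alive, that the platform is Linux, and that the process name is the expected SPDK reactor rather than sudo, before killing and reaping it. A simplified sketch of that pattern; the real helper handles the sudo case rather than bailing out.

    # Hedged sketch of the killprocess checks seen in the xtrace.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                  # must still be running
        [ "$(uname)" = Linux ] || return 1          # ps flags are Linux-specific
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_1 for bdevperf
        [ "$name" = sudo ] && return 1              # simplified: never kill sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }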
00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:13.169 12:27:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:13.169 [2024-07-22 12:27:20.908016] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:13.169 [2024-07-22 12:27:20.908108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137969 ] 00:32:13.169 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:13.169 Zero copy mechanism will not be used. 00:32:13.169 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.169 [2024-07-22 12:27:20.946845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:13.169 [2024-07-22 12:27:20.979763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.169 [2024-07-22 12:27:21.079184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.427 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:13.427 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:13.427 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:13.427 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:13.427 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:13.685 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.685 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.943 nvme0n1 00:32:13.943 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:13.943 12:27:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:14.207 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:14.207 Zero copy mechanism will not be used. 00:32:14.207 Running I/O for 2 seconds... 
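After each two-second job, the harness verifies that the crc32c digest work really ran and was executed by the expected module ("software" in every run here, since scan_dsa=false). A hedged re-creation of that check, with the jq program taken verbatim from the trace:

    # Hedged sketch of the per-run crc32c accounting check.
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 )) && [[ $acc_module == software ]]  # pass condition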
00:32:16.102 00:32:16.102 Latency(us) 00:32:16.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.102 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:16.102 nvme0n1 : 2.01 2897.15 362.14 0.00 0.00 5509.55 3810.80 10194.49 00:32:16.102 =================================================================================================================== 00:32:16.102 Total : 2897.15 362.14 0.00 0.00 5509.55 3810.80 10194.49 00:32:16.102 0 00:32:16.102 12:27:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:16.102 12:27:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:16.102 12:27:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:16.102 12:27:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:16.102 12:27:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:16.102 | select(.opcode=="crc32c") 00:32:16.102 | "\(.module_name) \(.executed)"' 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1137969 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1137969 ']' 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1137969 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137969 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137969' 00:32:16.360 killing process with pid 1137969 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1137969 00:32:16.360 Received shutdown signal, test time was about 2.000000 seconds 00:32:16.360 00:32:16.360 Latency(us) 00:32:16.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.360 =================================================================================================================== 00:32:16.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:16.360 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1137969 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1136589 00:32:16.617 12:27:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1136589 ']' 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1136589 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1136589 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1136589' 00:32:16.617 killing process with pid 1136589 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1136589 00:32:16.617 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1136589 00:32:16.875 00:32:16.875 real 0m15.268s 00:32:16.875 user 0m30.323s 00:32:16.875 sys 0m4.127s 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:16.875 ************************************ 00:32:16.875 END TEST nvmf_digest_clean 00:32:16.875 ************************************ 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:16.875 ************************************ 00:32:16.875 START TEST nvmf_digest_error 00:32:16.875 ************************************ 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1138482 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1138482 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1138482 ']' 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:16.875 12:27:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:17.132 [2024-07-22 12:27:24.812946] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:17.132 [2024-07-22 12:27:24.813027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.132 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.132 [2024-07-22 12:27:24.850197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:17.132 [2024-07-22 12:27:24.876169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.132 [2024-07-22 12:27:24.959403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.132 [2024-07-22 12:27:24.959457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.132 [2024-07-22 12:27:24.959485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.132 [2024-07-22 12:27:24.959497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.132 [2024-07-22 12:27:24.959506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
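The target for the error-path test is now up on /var/tmp/spdk.sock. The lines that follow route the crc32c opcode through the "error" accel module and later toggle injection around the controller attach: since the target computes the TCP data digests it sends, corrupting its crc32c output is what makes the initiator's nvme_tcp layer report the "data digest error" failures below, which bdevperf retries indefinitely (--bdev-retry-count -1). A hedged sketch of that RPC sequence; the transport/subsystem setup itself is hidden behind rpc_cmd in the xtrace and is only gestured at here, and the -i 256 semantics (injection count) are read from the flags, not documented in the trace.

    # Hedged sketch of the error-injection setup performed on the target.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"          # default target socket: /var/tmp/spdk.sock
    $RPC accel_assign_opc -o crc32c -m error        # crc32c -> error module
    # ... common_target_config: transport, null0 bdev, subsystem, listener ...
    $RPC accel_error_inject_error -o crc32c -t disable   # clean digests while attaching
    # ... bdevperf attaches nvme0 over /var/tmp/bperf.sock as before ...
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt 256 ops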
00:32:17.132 [2024-07-22 12:27:24.959533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:17.132 [2024-07-22 12:27:25.040127] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.132 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:17.390 null0 00:32:17.390 [2024-07-22 12:27:25.149446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.390 [2024-07-22 12:27:25.173676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1138512 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1138512 /var/tmp/bperf.sock 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1138512 ']' 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:17.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.390 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:17.390 [2024-07-22 12:27:25.223509] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:17.390 [2024-07-22 12:27:25.223582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138512 ] 00:32:17.390 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.390 [2024-07-22 12:27:25.260358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:17.390 [2024-07-22 12:27:25.289665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.647 [2024-07-22 12:27:25.385980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.647 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:17.647 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:17.647 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:17.647 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:17.904 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:17.905 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.905 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:17.905 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.905 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:17.905 12:27:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:18.502 nvme0n1 00:32:18.502 12:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:18.502 12:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.502 12:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:18.502 12:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.502 12:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:32:18.502 12:27:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:18.502 Running I/O for 2 seconds... 00:32:18.502 [2024-07-22 12:27:26.292509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.292564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.292591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.308419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.308456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.308476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.321398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.321449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.321469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.333978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.334014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.334039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.349789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.349819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.349838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.364668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.364699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.364717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.377137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 
[2024-07-22 12:27:26.377172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.377192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.392260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.392295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.392315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.406648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.406699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.406716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.502 [2024-07-22 12:27:26.419881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.502 [2024-07-22 12:27:26.419909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.502 [2024-07-22 12:27:26.419942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.759 [2024-07-22 12:27:26.434437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.759 [2024-07-22 12:27:26.434470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.759 [2024-07-22 12:27:26.434488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.759 [2024-07-22 12:27:26.447630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.759 [2024-07-22 12:27:26.447664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.759 [2024-07-22 12:27:26.447698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.464082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.464117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.464136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.476569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.476605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.476635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.490994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.491029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.491048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.506389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.506423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.506442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.520282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.520317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.520336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.538479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.538514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.538534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.555408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.555445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.555465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.567835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.567864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.567880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.583923] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.583972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.583991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.601022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.601056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.601076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.613862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.613891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.613907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.629669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.629725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.629741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.644524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.644559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.644578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.656699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.656728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.656752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.672881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.672909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.672925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:18.760 [2024-07-22 12:27:26.685278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:18.760 [2024-07-22 12:27:26.685311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.760 [2024-07-22 12:27:26.685331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.018 [2024-07-22 12:27:26.702656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:19.019 [2024-07-22 12:27:26.702702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-22 12:27:26.702718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-22 12:27:26.718853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:19.019 [2024-07-22 12:27:26.718882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-22 12:27:26.718897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-22 12:27:26.730957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:19.019 [2024-07-22 12:27:26.730991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-22 12:27:26.731010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-22 12:27:26.747481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:19.019 [2024-07-22 12:27:26.747517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-22 12:27:26.747536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-22 12:27:26.763389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:19.019 [2024-07-22 12:27:26.763423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-22 12:27:26.763442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-22 12:27:26.776486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:19.019 [2024-07-22 12:27:26.776519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-22 12:27:26.776538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-22 12:27:26.790377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:19.019 [2024-07-22 12:27:26.790416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-22 12:27:26.790435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[entries from 12:27:26.804985 through 12:27:28.267336 elided: the same three-message pattern -- data digest error on tqpair=(0x1188110), the READ command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every read completion in the 2-second window, differing only in timestamp, cid, and lba]
00:32:20.574 [2024-07-22 12:27:28.279731] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1188110) 00:32:20.574 [2024-07-22 12:27:28.279763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.574 [2024-07-22 12:27:28.279779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.574 00:32:20.574 Latency(us) 00:32:20.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:20.574 nvme0n1 : 2.01 17549.67 68.55 0.00 0.00 7282.12 3859.34 24855.13 00:32:20.574 =================================================================================================================== 00:32:20.574 Total : 17549.67 68.55 0.00 0.00 7282.12 3859.34 24855.13 00:32:20.574 0 00:32:20.574 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:20.574 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:20.574 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:20.574 | .driver_specific 00:32:20.574 | .nvme_error 00:32:20.574 | .status_code 00:32:20.574 | .command_transient_transport_error' 00:32:20.574 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1138512 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1138512 ']' 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1138512 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1138512 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1138512' 00:32:20.833 killing process with pid 1138512 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1138512 00:32:20.833 Received shutdown signal, test time was about 2.000000 seconds 00:32:20.833 00:32:20.833 Latency(us) 00:32:20.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.833 =================================================================================================================== 00:32:20.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:20.833 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1138512 00:32:21.092 12:27:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1138925 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1138925 /var/tmp/bperf.sock 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1138925 ']' 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:21.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:21.092 12:27:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:21.092 [2024-07-22 12:27:28.855225] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:21.092 [2024-07-22 12:27:28.855323] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138925 ] 00:32:21.092 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:21.092 Zero copy mechanism will not be used. 00:32:21.092 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.092 [2024-07-22 12:27:28.888504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
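bdevperf is started with -z here, so it idles until perform_tests arrives over /var/tmp/bperf.sock, and the harness blocks in waitforlisten until that socket answers. A minimal sketch of such a wait loop follows; this is a hypothetical re-implementation (the real helper lives in autotest_common.sh), and using rpc_get_methods as the liveness probe is an assumption:

  # Poll the bdevperf RPC socket until it accepts requests, then continue with the test
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done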
00:32:21.092 [2024-07-22 12:27:28.921380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.092 [2024-07-22 12:27:29.011733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.350 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:21.350 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:21.350 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:21.350 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:21.608 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:21.608 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.608 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:21.608 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.608 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.608 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.865 nvme0n1 00:32:21.865 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:21.865 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.866 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:21.866 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.866 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:21.866 12:27:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:22.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:22.124 Zero copy mechanism will not be used. 00:32:22.124 Running I/O for 2 seconds... 
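The flood of *ERROR*/TRANSIENT TRANSPORT lines that follows is the intended behavior of this test, not a failure: host/digest.sh@63 clears any stale crc32c injection, @64 attaches the controller with --ddgst so the initiator verifies a data digest on every read, @67 arms accel_error_inject_error -o crc32c -t corrupt -i 32, @69 runs perform_tests for the 2-second window, and @71 then reads the transient-error counter back out of iostat exactly as it did for the previous run. A condensed sketch of that cycle, re-assembled from the commands visible in this trace rather than quoted from digest.sh (which RPC socket rpc_cmd targets for the injection steps is not visible here, so the bare invocations below are an assumption):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf() { "$rpc" -s /var/tmp/bperf.sock "$@"; }   # equivalent of bperf_rpc in digest.sh
  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # retry forever, keep error stats
  "$rpc" accel_error_inject_error -o crc32c -t disable                  # socket assumed, as rpc_cmd's is not shown
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32            # make computed data digests wrong
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  errs=$(bperf bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 ))   # the test asserts at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded

With the injection armed, every read that completes over the --ddgst-enabled qpair fails its digest check and is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the repeated entries below show.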
00:32:22.124 [2024-07-22 12:27:29.848162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.124 [2024-07-22 12:27:29.848216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.124 [2024-07-22 12:27:29.848237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.124 [2024-07-22 12:27:29.859393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.124 [2024-07-22 12:27:29.859430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.124 [2024-07-22 12:27:29.859451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.124 [2024-07-22 12:27:29.869413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.124 [2024-07-22 12:27:29.869449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.124 [2024-07-22 12:27:29.869468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.124 [2024-07-22 12:27:29.880248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.124 [2024-07-22 12:27:29.880283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.124 [2024-07-22 12:27:29.880302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.891179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.891215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.891234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.902610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.902668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.902685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.913149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.913185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.913204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.923345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.923381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.923400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.933797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.933837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.933854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.944157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.944193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.944226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.954990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.955026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.955046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.965890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.965946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.965965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.976103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.976138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.976158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.987721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.987752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.987768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:29.998211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:29.998247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:29.998266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:30.008417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:30.008468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:30.008488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:30.018843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:30.018885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:30.018903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:30.029294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:30.029337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:30.029356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:30.039085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:30.039128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:30.039147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.125 [2024-07-22 12:27:30.049224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.125 [2024-07-22 12:27:30.049264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.125 [2024-07-22 12:27:30.049283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.383 [2024-07-22 12:27:30.058343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.383 [2024-07-22 12:27:30.058378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:22.383 [2024-07-22 12:27:30.058397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.383 [2024-07-22 12:27:30.067350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.383 [2024-07-22 12:27:30.067383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.067402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.076579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.076623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.076644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.085735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.085778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.085794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.094808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.094836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.094851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.103803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.103832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.103870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.113280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.113314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.113332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.122648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.122695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.122712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.132439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.132473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.132492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.141786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.141817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.141834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.151140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.151173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.151192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.160153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.160186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.160204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.169067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.169101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.169119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.178041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.178073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.178092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.187043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.187081] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.187100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.196080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.196113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.196131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.204992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.205023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.205041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.213992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.214025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.214043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.223251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.223284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.223302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.233135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.233169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.233188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.242316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.242350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.242369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.251785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.251815] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.251831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.261377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.261411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.261430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.270272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.270306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.270324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.279482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.279515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.279533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.288491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.288523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.288542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.297426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.297458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.297476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.384 [2024-07-22 12:27:30.306398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.384 [2024-07-22 12:27:30.306432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.384 [2024-07-22 12:27:30.306450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.315853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 
00:32:22.643 [2024-07-22 12:27:30.315898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.315914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.325442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.325477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.325496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.334919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.334953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.334971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.344272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.344306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.344331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.353665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.353695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.353726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.362988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.363022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.363041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.372268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.372301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.372320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.381415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.381447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.381465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.390382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.390414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.390432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.399417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.399449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.399467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.408473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.408506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.408525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.418563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.418598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.418625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.427965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.427999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.428018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.436943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.436976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.436994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.446732] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.446777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.446794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.455880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.455911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.455942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.464821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.464864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.464880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.473796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.473825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.473841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.482823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.482867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.482884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.491949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.643 [2024-07-22 12:27:30.491997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.643 [2024-07-22 12:27:30.492015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.643 [2024-07-22 12:27:30.501064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.501096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.501120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:22.644 [2024-07-22 12:27:30.510440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.510473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.510491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.644 [2024-07-22 12:27:30.519737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.519782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.519798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.644 [2024-07-22 12:27:30.529325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.529359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.529378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.644 [2024-07-22 12:27:30.538658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.538704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.538720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.644 [2024-07-22 12:27:30.548749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.548781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.548798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.644 [2024-07-22 12:27:30.558158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.558192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.558211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.644 [2024-07-22 12:27:30.567903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.644 [2024-07-22 12:27:30.567949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.644 [2024-07-22 12:27:30.567969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.902 [2024-07-22 12:27:30.577739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.902 [2024-07-22 12:27:30.577771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.902 [2024-07-22 12:27:30.577788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.902 [2024-07-22 12:27:30.587037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.902 [2024-07-22 12:27:30.587077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.902 [2024-07-22 12:27:30.587097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.902 [2024-07-22 12:27:30.596774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.596818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.596834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.606027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.606061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.606080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.615639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.615687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.615703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.625098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.625132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.625150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.634536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.634570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.634589] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.643745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.643775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.643791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.653200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.653234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.653253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.662396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.662429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.662448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.671867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.671897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.671913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.681752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.681802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.681820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.690436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.690469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.690487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.699951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.699997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.700017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.709487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.709521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.709540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.719230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.719265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.719283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.729128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.729164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.729184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.738635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.738682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.738699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.748330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.748364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.748390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.757495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.757529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.757547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.766703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.766747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:22.903 [2024-07-22 12:27:30.766764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.776050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.776084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.776103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.785702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.785733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.785749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.795573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.795608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.795653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.805060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.805094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.805113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.814691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.814720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.814751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.903 [2024-07-22 12:27:30.824444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:22.903 [2024-07-22 12:27:30.824478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.903 [2024-07-22 12:27:30.824497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.161 [2024-07-22 12:27:30.833720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.161 [2024-07-22 12:27:30.833755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.161 [2024-07-22 12:27:30.833774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.161 [2024-07-22 12:27:30.843369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.843403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.843421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.852483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.852516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.852534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.862180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.862215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.862233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.871782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.871813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.871829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.881453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.881488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.881507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.891303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.891337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.891356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.901209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.901243] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.901262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.910956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.911000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.911016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.920453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.920486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.920504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.930038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.930073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.930091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.939411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.939444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.939463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.948929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.948976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.948995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.958172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.958204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.162 [2024-07-22 12:27:30.958222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.162 [2024-07-22 12:27:30.967235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00) 00:32:23.162 [2024-07-22 12:27:30.967268] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:23.162 [2024-07-22 12:27:30.967286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:23.162 [2024-07-22 12:27:30.976291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43f00)
00:32:23.162 [2024-07-22 12:27:30.976324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:23.162 [2024-07-22 12:27:30.976342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... roughly one hundred similar entries trimmed: the same three-record pattern (an nvme_tcp.c:1459 data digest error on tqpair 0x1d43f00, the affected READ reprinted by nvme_qpair.c:243, and an nvme_qpair.c:474 completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for every remaining READ of the 2-second randread run, qid:1, cids 0-12 and 15, from 12:27:30.985315 through 12:27:31.834960 ...]
00:32:23.941
00:32:23.941 Latency(us)
00:32:23.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.941 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:23.941 nvme0n1 : 2.00 3276.55 409.57 0.00 0.00 4878.63 1195.43 11699.39
00:32:23.941 ===================================================================================================================
00:32:23.941 Total : 3276.55 409.57 0.00 0.00 4878.63 1195.43 11699.39
00:32:23.941 0
00:32:23.941 12:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:23.941 12:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:23.941 12:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:23.941 | .driver_specific
00:32:23.941 | .nvme_error
00:32:23.941 | .status_code
00:32:23.941 | .command_transient_transport_error'
00:32:23.941 12:27:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:24.198 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 ))
00:32:24.198 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1138925
00:32:24.198 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1138925 ']'
00:32:24.198 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1138925
00:32:24.198 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:24.198 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:24.198 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1138925
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1138925'
killing process with pid 1138925
12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1138925
Received shutdown signal, test time was about 2.000000 seconds
00:32:24.455
00:32:24.455 Latency(us)
00:32:24.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:24.455 ===================================================================================================================
00:32:24.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1138925
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1139439
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1139439 /var/tmp/bperf.sock
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1139439 ']'
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:24.455 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:24.712 [2024-07-22 12:27:32.407464] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:32:24.712 [2024-07-22 12:27:32.407541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139439 ]
00:32:24.712 EAL: No free 2048 kB hugepages reported on node 1
00:32:24.712 [2024-07-22 12:27:32.437792] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:24.712 [2024-07-22 12:27:32.469240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:24.712 [2024-07-22 12:27:32.560952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:24.969 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:24.969 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:24.969 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:24.969 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:25.226 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:25.226 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:25.226 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:25.226 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:25.226 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:25.226 12:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:25.482 nvme0n1
00:32:25.482 12:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:25.482 12:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:25.482 12:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:25.482 12:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:25.482 12:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:25.482 12:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:25.739 Running I/O for 2 seconds...
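(For readability: distilled from the xtrace lines above, the setup for this randwrite error case reduces to the short sequence below. This is a sketch, not a verbatim script; rpc_cmd without an -s flag appears to address the nvmf target app's default RPC socket, while the -s /var/tmp/bperf.sock calls address the bdevperf instance just started.)

    bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &   # I/O generator; idles until perform_tests
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters, never retry
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # TCP data digest enabled
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256                  # corrupt crc32c results so data digests miscompare
    bdevperf.py -s /var/tmp/bperf.sock perform_tests                             # 2 seconds of 4096-byte randwrite at queue depth 128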
00:32:25.739 [2024-07-22 12:27:33.495805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e6738
[2024-07-22 12:27:33.497001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-22 12:27:33.497041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... similar entries trimmed: the same three-record pattern (a tcp.c:2113 data_crc32_calc_done digest error on tqpair 0x22895c0, with a pdu offset that differs per I/O, the affected WRITE reprinted by nvme_qpair.c:243, and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for the run's subsequent 4096-byte WRITEs, from 12:27:33.509304 until this excerpt breaks off mid-record at 12:27:33.843792 ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:25.998 [2024-07-22 12:27:33.856492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e5ec8 00:32:25.998 [2024-07-22 12:27:33.857838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.998 [2024-07-22 12:27:33.857959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:25.998 [2024-07-22 12:27:33.869904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f8a50 00:32:25.998 [2024-07-22 12:27:33.871569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.998 [2024-07-22 12:27:33.871598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:25.998 [2024-07-22 12:27:33.882357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e0a68 00:32:25.998 [2024-07-22 12:27:33.883850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.998 [2024-07-22 12:27:33.884012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:25.998 [2024-07-22 12:27:33.895821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190feb58 00:32:25.998 [2024-07-22 12:27:33.897465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.998 [2024-07-22 12:27:33.897590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:25.998 [2024-07-22 12:27:33.906673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190eff18 00:32:25.998 [2024-07-22 12:27:33.907976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.998 [2024-07-22 12:27:33.908006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:25.998 [2024-07-22 12:27:33.918429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fc998 00:32:25.998 [2024-07-22 12:27:33.919498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:25.998 [2024-07-22 12:27:33.919544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:26.260 [2024-07-22 12:27:33.932682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e3d08 00:32:26.261 [2024-07-22 12:27:33.933942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 
12:27:33.933987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:33.945294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e3d08 00:32:26.261 [2024-07-22 12:27:33.946566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:33.946599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:33.957853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e3d08 00:32:26.261 [2024-07-22 12:27:33.959114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:33.959146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:33.971985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e3d08 00:32:26.261 [2024-07-22 12:27:33.973888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:33.973931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:33.984595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ea680 00:32:26.261 [2024-07-22 12:27:33.986268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:33.986394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:33.996444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e3060 00:32:26.261 [2024-07-22 12:27:33.997874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:33.997900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.009129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f0bc0 00:32:26.261 [2024-07-22 12:27:34.010589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.010774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.021723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f0bc0 00:32:26.261 [2024-07-22 12:27:34.023137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 
[2024-07-22 12:27:34.023170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.034319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f0bc0 00:32:26.261 [2024-07-22 12:27:34.035798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.035826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.046866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f0bc0 00:32:26.261 [2024-07-22 12:27:34.048318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.048481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.059319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f0bc0 00:32:26.261 [2024-07-22 12:27:34.060802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.060981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.073424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fa3a0 00:32:26.261 [2024-07-22 12:27:34.075588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.075730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.085179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e23b8 00:32:26.261 [2024-07-22 12:27:34.086829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.086858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.098088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f5be8 00:32:26.261 [2024-07-22 12:27:34.099571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.099604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.108879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f5be8 00:32:26.261 [2024-07-22 12:27:34.109750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3829 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:26.261 [2024-07-22 12:27:34.109783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.122019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f2510 00:32:26.261 [2024-07-22 12:27:34.122863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.122889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.134896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f9b30 00:32:26.261 [2024-07-22 12:27:34.135765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.135794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.147939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e8d30 00:32:26.261 [2024-07-22 12:27:34.149101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.149172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.161795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f4f40 00:32:26.261 [2024-07-22 12:27:34.163177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.163224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.174358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ed0b0 00:32:26.261 [2024-07-22 12:27:34.175824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.261 [2024-07-22 12:27:34.175856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.261 [2024-07-22 12:27:34.187289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190df550 00:32:26.524 [2024-07-22 12:27:34.188998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.524 [2024-07-22 12:27:34.189080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:26.524 [2024-07-22 12:27:34.198206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fda78 00:32:26.524 [2024-07-22 12:27:34.199298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21248 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.524 [2024-07-22 12:27:34.199344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:26.524 [2024-07-22 12:27:34.212065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f0350 00:32:26.524 [2024-07-22 12:27:34.213519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.524 [2024-07-22 12:27:34.213567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:26.524 [2024-07-22 12:27:34.222530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ee5c8 00:32:26.524 [2024-07-22 12:27:34.223417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.524 [2024-07-22 12:27:34.223542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:26.524 [2024-07-22 12:27:34.237528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f3e60 00:32:26.524 [2024-07-22 12:27:34.239220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.239266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.251132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f4f40 00:32:26.525 [2024-07-22 12:27:34.253426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.253459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.264382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f96f8 00:32:26.525 [2024-07-22 12:27:34.265843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.265872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.276868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f96f8 00:32:26.525 [2024-07-22 12:27:34.278303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.278336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.289331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f96f8 00:32:26.525 [2024-07-22 12:27:34.290965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:9984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.291001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.301843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f96f8 00:32:26.525 [2024-07-22 12:27:34.303258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.303419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.314240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f96f8 00:32:26.525 [2024-07-22 12:27:34.315710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.315872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.326840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f96f8 00:32:26.525 [2024-07-22 12:27:34.328258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.328305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.339265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f96f8 00:32:26.525 [2024-07-22 12:27:34.340728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.340884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.353459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f92c0 00:32:26.525 [2024-07-22 12:27:34.355531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.355579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.363123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190dfdc0 00:32:26.525 [2024-07-22 12:27:34.364205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.364236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.376537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fd640 00:32:26.525 [2024-07-22 12:27:34.377770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:77 nsid:1 lba:8974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.377802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.388964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fd640 00:32:26.525 [2024-07-22 12:27:34.390149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.390178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.401405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e5a90 00:32:26.525 [2024-07-22 12:27:34.402609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.402646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.414510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ed0b0 00:32:26.525 [2024-07-22 12:27:34.415601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.415639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.426951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f4298 00:32:26.525 [2024-07-22 12:27:34.427992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.428021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.439368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ec840 00:32:26.525 [2024-07-22 12:27:34.440560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.440599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:26.525 [2024-07-22 12:27:34.452397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fc998 00:32:26.525 [2024-07-22 12:27:34.453734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.525 [2024-07-22 12:27:34.453763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.465396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e9e10 00:32:26.781 [2024-07-22 12:27:34.466732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.466762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.478602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e0ea0 00:32:26.781 [2024-07-22 12:27:34.479898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.479942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.491693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e49b0 00:32:26.781 [2024-07-22 12:27:34.492935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.492978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.504699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fa3a0 00:32:26.781 [2024-07-22 12:27:34.505928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.505961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.517805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ee190 00:32:26.781 [2024-07-22 12:27:34.519089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.519171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.530884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fda78 00:32:26.781 [2024-07-22 12:27:34.532155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.532188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.543771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e5a90 00:32:26.781 [2024-07-22 12:27:34.545344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.545378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.556240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190de038 00:32:26.781 [2024-07-22 
12:27:34.557110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.557144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.568866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fb480 00:32:26.781 [2024-07-22 12:27:34.569682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.569713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.581528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f7538 00:32:26.781 [2024-07-22 12:27:34.582484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.582513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.596620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190de470 00:32:26.781 [2024-07-22 12:27:34.598462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.598491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.609189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ee190 00:32:26.781 [2024-07-22 12:27:34.611000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.611029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.621012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ff3c8 00:32:26.781 [2024-07-22 12:27:34.622658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.622687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.633445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190feb58 00:32:26.781 [2024-07-22 12:27:34.635002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.635031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.646427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with 
pdu=0x2000190f4298 00:32:26.781 [2024-07-22 12:27:34.647994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.648026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.659450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fcdd0 00:32:26.781 [2024-07-22 12:27:34.660982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.781 [2024-07-22 12:27:34.661014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:26.781 [2024-07-22 12:27:34.672420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f81e0 00:32:26.781 [2024-07-22 12:27:34.673931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.782 [2024-07-22 12:27:34.673959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:26.782 [2024-07-22 12:27:34.685455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f2d80 00:32:26.782 [2024-07-22 12:27:34.686945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.782 [2024-07-22 12:27:34.686992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:26.782 [2024-07-22 12:27:34.698451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f6cc8 00:32:26.782 [2024-07-22 12:27:34.699940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.782 [2024-07-22 12:27:34.699973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:26.782 [2024-07-22 12:27:34.711493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f5378 00:32:27.039 [2024-07-22 12:27:34.712836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.712865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.724468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e49b0 00:32:27.039 [2024-07-22 12:27:34.725901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.725948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.737541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22895c0) with pdu=0x2000190e6300 00:32:27.039 [2024-07-22 12:27:34.738959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.739002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.750633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190eea00 00:32:27.039 [2024-07-22 12:27:34.752022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.752051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.763730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190dfdc0 00:32:27.039 [2024-07-22 12:27:34.765074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.765107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.776677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f7970 00:32:27.039 [2024-07-22 12:27:34.778025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.778063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.789665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e0630 00:32:27.039 [2024-07-22 12:27:34.790968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.791005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.802605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e3498 00:32:27.039 [2024-07-22 12:27:34.803901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.803947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.815571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e4140 00:32:27.039 [2024-07-22 12:27:34.816828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.816861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.828582] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f9b30 00:32:27.039 [2024-07-22 12:27:34.829838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.829866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.841804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190dece0 00:32:27.039 [2024-07-22 12:27:34.843017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.843049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.854843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f9f68 00:32:27.039 [2024-07-22 12:27:34.856048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.856081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.867953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ec840 00:32:27.039 [2024-07-22 12:27:34.869134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.869168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.881090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e0ea0 00:32:27.039 [2024-07-22 12:27:34.882258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.882292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.894223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190f7100 00:32:27.039 [2024-07-22 12:27:34.895355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.895387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.907320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fc998 00:32:27.039 [2024-07-22 12:27:34.908431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.908464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.920383] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e27f0 00:32:27.039 [2024-07-22 12:27:34.921508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.921541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.933517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e5220 00:32:27.039 [2024-07-22 12:27:34.934621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.934669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.946630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190de470 00:32:27.039 [2024-07-22 12:27:34.947690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.947720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:27.039 [2024-07-22 12:27:34.959633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.039 [2024-07-22 12:27:34.960649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.039 [2024-07-22 12:27:34.960695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:34.972765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ecc78 00:32:27.298 [2024-07-22 12:27:34.973777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:34.973808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:34.985875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190ddc00 00:32:27.298 [2024-07-22 12:27:34.986868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:34.986896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:34.998939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190e7c50 00:32:27.298 [2024-07-22 12:27:34.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:34.999949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:27.298 
[2024-07-22 12:27:35.013948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.298 [2024-07-22 12:27:35.014457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:35.014491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:35.027771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.298 [2024-07-22 12:27:35.028095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:35.028129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:35.041576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.298 [2024-07-22 12:27:35.041927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:35.041957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:35.055531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.298 [2024-07-22 12:27:35.055858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:35.055887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:35.069323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.298 [2024-07-22 12:27:35.069691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:35.069733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:35.083265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.298 [2024-07-22 12:27:35.083591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:35.083633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.298 [2024-07-22 12:27:35.097176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.298 [2024-07-22 12:27:35.097513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.298 [2024-07-22 12:27:35.097546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
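Note on the records above: in NVMe/TCP the optional data digest (DDGST) is a CRC32C computed over each PDU's data payload. tcp.c's data_crc32_calc_done fires when the recomputed digest disagrees with the digest carried in the received PDU, and the initiator then completes the affected WRITE with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the triplet this test provokes. The sketch below is a minimal, illustrative bitwise CRC32C only; it is not SPDK's code (SPDK's util library ships table- and instruction-accelerated CRC32C helpers), and the crc32c() name here is hypothetical.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch, not SPDK code: CRC32C (Castagnoli) in its
     * standard reflected form, as used for NVMe/TCP header/data digests.
     * Polynomial 0x1EDC6F41 -> reflected 0x82F63B78; seed and final XOR
     * are both 0xFFFFFFFF. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++) {
                /* Shift one bit out; fold in the polynomial when a 1 falls out. */
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Well-known CRC32C check value: "123456789" -> 0xE3069283.
         * A mismatch between this computation on a received payload and
         * the digest field in the PDU is what the log reports as a
         * "Data digest error". */
        const char *check = "123456789";
        printf("crc32c(\"%s\") = 0x%08" PRIX32 "\n",
               check, crc32c(check, strlen(check)));
        return 0;
    }

Production implementations replace the bit loop with lookup tables or the SSE4.2 crc32 instruction, but any variant must reproduce the 0xE3069283 check value above.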
00:32:27.298 [2024-07-22 12:27:35.111168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10
[2024-07-22 12:27:35.111490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-22 12:27:35.111523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pattern repeats from 12:27:35.125 through 12:27:35.361 for alternating cid:0 and cid:85 WRITE commands, all on pdu=0x2000190fac10 with sqhd:0000; only the timestamps and lba vary ...]
00:32:27.565 [2024-07-22 12:27:35.374441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10
[2024-07-22 12:27:35.374787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-22 12:27:35.374815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.565 [2024-07-22 12:27:35.388355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10
[2024-07-22 12:27:35.388693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-22 12:27:35.388736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.565 [2024-07-22 12:27:35.402154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.565 [2024-07-22 12:27:35.402423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.565 [2024-07-22 12:27:35.402456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.565 [2024-07-22 12:27:35.416043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.565 [2024-07-22 12:27:35.416366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.565 [2024-07-22 12:27:35.416399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.565 [2024-07-22 12:27:35.430003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.565 [2024-07-22 12:27:35.430327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.565 [2024-07-22 12:27:35.430359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.565 [2024-07-22 12:27:35.443870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.565 [2024-07-22 12:27:35.444210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.565 [2024-07-22 12:27:35.444243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.565 [2024-07-22 12:27:35.457780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.565 [2024-07-22 12:27:35.458118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.565 [2024-07-22 12:27:35.458150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.565 [2024-07-22 12:27:35.471799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.565 [2024-07-22 12:27:35.472124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.565 [2024-07-22 12:27:35.472156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.565 [2024-07-22 12:27:35.485575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22895c0) with pdu=0x2000190fac10 00:32:27.565 [2024-07-22 12:27:35.485930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.565 [2024-07-22 12:27:35.485959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
00:32:27.565
00:32:27.565                                                                     Latency(us)
00:32:27.565 Device Information                                          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average        min        max
00:32:27.565 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:27.565 nvme0n1                                                     :       2.01   19483.55      76.11      0.00    0.00    6553.41    3301.07   17476.27
00:32:27.565 ===================================================================================================================
00:32:27.565 Total                                                       :              19483.55      76.11      0.00    0.00    6553.41    3301.07   17476.27
00:32:27.565 0
00:32:27.822 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:27.822 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:27.822 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:27.822 | .driver_specific
00:32:27.822 | .nvme_error
00:32:27.822 | .status_code
00:32:27.822 | .command_transient_transport_error'
00:32:27.822 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1139439
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1139439 ']'
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1139439
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1139439
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1139439'
00:32:28.080 killing process with pid 1139439
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1139439
00:32:28.080 Received shutdown signal, test time was about 2.000000 seconds
00:32:28.080
00:32:28.080                                                                     Latency(us)
00:32:28.080 Device Information                                          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average        min        max
00:32:28.080 ===================================================================================================================
00:32:28.080 Total                                                       :                  0.00       0.00      0.00    0.00       0.00       0.00       0.00
00:32:28.080 12:27:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1139439
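The check traced just above is the heart of the digest_error subtest: fetch iostat from bdevperf over /var/tmp/bperf.sock and require the NVMe command_transient_transport_error counter to be non-zero (153 here). A minimal standalone sketch of the same check, assuming the same socket path and bdev name as in the trace, with SPDK_DIR standing in for the jenkins workspace path:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check from host/digest.sh, not the
# script itself. Assumes bdevperf is still listening on /var/tmp/bperf.sock
# and that bdev_nvme_set_options --nvme-error-stat was applied earlier, so
# per-status-code NVMe error counters are present in the iostat output.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

get_transient_errcount() {
    # bdev_get_iostat returns JSON; driver_specific.nvme_error holds one
    # counter per NVMe status code, including 00/22 (Transient Transport Error).
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# Pass only if the injected crc32c corruption surfaced as transient transport
# errors; with set -e a zero count fails the script, mirroring (( 153 > 0 )).
(( errcount > 0 )) && echo "OK: $errcount transient transport errors recorded"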
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1139843
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1139843 /var/tmp/bperf.sock
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1139843 ']'
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:28.338 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:28.338 [2024-07-22 12:27:36.073402] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:32:28.338 [2024-07-22 12:27:36.073494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139843 ]
00:32:28.338 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:28.338 Zero copy mechanism will not be used.
00:32:28.338 EAL: No free 2048 kB hugepages reported on node 1
00:32:28.338 [2024-07-22 12:27:36.105974] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:28.338 [2024-07-22 12:27:36.134025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:28.338 [2024-07-22 12:27:36.218932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:28.596 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:28.596 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:28.596 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:28.596 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:28.853 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:28.853 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:28.853 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:28.853 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:28.853 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:28.853 12:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:29.418 nvme0n1
00:32:29.418 12:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:29.418 12:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:29.418 12:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:29.418 12:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:29.418 12:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:29.418 12:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:29.418 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:29.418 Zero copy mechanism will not be used.
00:32:29.418 Running I/O for 2 seconds...
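The trace above is the complete arming sequence for this second subtest (131072-byte randwrite, qd 16): enable per-status error counters and unlimited bdev retries on the bdevperf side, clear any corruption left armed by the previous subtest, attach the controller with TCP data digest enabled, then corrupt the next 32 crc32c operations before releasing the queued workload. Roughly the same sequence in plain shell, as a sketch only: rpc_cmd in digest.sh appears to target the nvmf app on the default RPC socket while bperf_rpc targets /var/tmp/bperf.sock, and SPDK_DIR again stands in for the workspace path:

#!/usr/bin/env bash
# Sketch of the arming sequence traced above (host/digest.sh@61-69), assuming
# an nvmf target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420
# and a bdevperf started with -z, waiting on /var/tmp/bperf.sock.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

rpc_tgt()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                         # target app, default socket
rpc_bperf() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf initiator

# Count NVMe errors per status code and retry failed I/O indefinitely, so
# injected digest failures are recorded instead of failing the job outright.
rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear leftover corruption, then attach with TCP data digest (--ddgst)
# enabled; the digests are what the injected crc32c errors will break.
rpc_tgt accel_error_inject_error -o crc32c -t disable
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 32 crc32c operations so data-digest checks fail, then
# kick off the workload bdevperf has been holding since -z.
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests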
00:32:29.418 [2024-07-22 12:27:37.246036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90
00:32:29.418 [2024-07-22 12:27:37.246929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:29.418 [2024-07-22 12:27:37.246988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:29.418 .. 00:32:30.454 [2024-07-22 12:27:37.256069 .. 12:27:38.213568] The same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for the rest of this capture, roughly every 10 ms: tqpair=(0x2289760), pdu=0x2000190fef90, WRITE sqid:1 with cid cycling over 0/1/2/3/15, nsid:1, varying lba, len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, completions with cdw0:0, sqhd cycling 0001/0021/0041/0061, p:0 m:0 dnr:0, while the 2-second randwrite run is still in flight.
[2024-07-22 12:27:38.223278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.225173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.225202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.233131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.234134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.234230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.244092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.244653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.244780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.254603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.256098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.256128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.265728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.266655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.266691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.276243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.277304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.277657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.286449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.288335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.288382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.297088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.297858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.298124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.307631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.308417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.309343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.318347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.319304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.320250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.329734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.330631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.331480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.340150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.341602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.341641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.350712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.351374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.352258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.361332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.362647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.362732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.372452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.454 [2024-07-22 12:27:38.373679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.454 [2024-07-22 12:27:38.374300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.454 [2024-07-22 12:27:38.382887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.384344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.384375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.393836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.395104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.395135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.404647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.406371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.406402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.415217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.415951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.416071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.425140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.426225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.426354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.435833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.436908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.436992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.446702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.447503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.447534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.457211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.458630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.458661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.467465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.468720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.468755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.478323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.479501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.479532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.488737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.489697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.775 [2024-07-22 12:27:38.490323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.775 [2024-07-22 12:27:38.498484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.775 [2024-07-22 12:27:38.500036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.500067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.509453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.510876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.510923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.520337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.521203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.521389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.530692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.531498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.531529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.541607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.542653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.542685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.553077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.553863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.553913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.563883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.565403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.565434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.573325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.574646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.574677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.582934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.583589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 
12:27:38.583627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.593032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.594277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.594309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.603282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.603728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.603761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.613931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.615010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.615056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.624673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.625841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.625872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.635539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.636950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.636981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.646106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.647085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.647233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.656826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.657319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.657365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.667760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.668843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.668875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.678273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.679273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.679304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.776 [2024-07-22 12:27:38.689114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:30.776 [2024-07-22 12:27:38.690147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.776 [2024-07-22 12:27:38.690839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.699741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.700495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.700575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.710550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.711974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.712004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.721652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.722945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.722977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.732436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.733672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.733703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.742965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.743934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.743964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.753468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.753899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.754394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.763962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.765207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.765240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.774991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.775695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.775725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.785330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.786151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.787137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.795537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.796049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.796080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.806076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.807238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.807268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.817848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.818811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.819308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.828399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.829331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.829378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.838986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.840002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.840795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.850023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.064 [2024-07-22 12:27:38.851904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.064 [2024-07-22 12:27:38.851949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.064 [2024-07-22 12:27:38.860755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.861959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.862212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.871293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.873188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.873218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.881729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.882773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.883724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.892480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.893757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.893787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.903282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.904900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.904931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.913589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.914333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.915014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.924604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.925857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.926672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.935891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.937190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.937220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.947212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.948278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.948308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.957508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 
12:27:38.958569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.959471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.967780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.968075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.968206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.978585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.979647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.979882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.065 [2024-07-22 12:27:38.989289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.065 [2024-07-22 12:27:38.989979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.065 [2024-07-22 12:27:38.990009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:38.999202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:38.999874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.000376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.010096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.011901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.011934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.021216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.021884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.021915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.032121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 
00:32:31.323 [2024-07-22 12:27:39.033199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.033229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.042904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.044298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.044329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.053645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.054865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.054896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.064737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.066099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.066129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.075243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.076754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.076785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.086044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.087441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.087486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.096802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.098338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.098367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.106261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.107302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.108052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.116682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.117912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.117942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.127550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.128805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.128836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.138744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.139919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.139950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.149083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.150499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.150546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.159667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.160952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.160983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.170182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90 00:32:31.323 [2024-07-22 12:27:39.171071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.323 [2024-07-22 12:27:39.171190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.323 [2024-07-22 12:27:39.180966] 
00:32:31.323 [2024-07-22 12:27:39.234697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289760) with pdu=0x2000190fef90
00:32:31.323 [2024-07-22 12:27:39.235417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.323 [2024-07-22 12:27:39.235499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:31.323
00:32:31.323 Latency(us)
00:32:31.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.323 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:31.323 nvme0n1 : 2.01 2877.14 359.64 0.00 0.00 5524.35 3422.44 15243.19
00:32:31.324 ===================================================================================================================
00:32:31.324 Total : 2877.14 359.64 0.00 0.00 5524.35 3422.44 15243.19
00:32:31.324 0
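A quick consistency check on the summary table above: at an IO size of 131072 bytes (0.125 MiB), throughput in MiB/s is simply IOPS * 0.125, and 2877.14 * 0.125 = 359.64 MiB/s, matching the reported value. Fail/s and TO/s stay at zero because in this run the injected digest errors surface as transient transport errors on individual completions (counted via the iostat query below) rather than as failed or timed-out jobs.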
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:31.581 | .driver_specific
00:32:31.581 | .nvme_error
00:32:31.581 | .status_code
00:32:31.581 | .command_transient_transport_error'
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 186 > 0 ))
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1139843
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1139843 ']'
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1139843
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:31.581 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1139843
00:32:31.838 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:31.838 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:32:31.838 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1139843'
killing process with pid 1139843
12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1139843
Received shutdown signal, test time was about 2.000000 seconds
00:32:31.839
00:32:31.839 Latency(us)
00:32:31.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.839 ===================================================================================================================
00:32:31.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:31.839 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1139843
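For reference, the transient-error assertion traced above collapses to a single RPC-plus-jq pipeline. A minimal standalone sketch, using the rpc.py path, bperf RPC socket, and bdev name from this run (the errcount variable name is illustrative, not the script's):

    #!/usr/bin/env bash
    # Fetch per-bdev I/O statistics over the bperf RPC socket, then extract the
    # NVMe transient-transport-error completion counter the digest test asserts on.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                   -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test passes when at least one such completion was recorded (186 in this run).
    (( errcount > 0 ))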
'killing process with pid 1138482' 00:32:32.096 killing process with pid 1138482 00:32:32.096 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1138482 00:32:32.096 12:27:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1138482 00:32:32.096 00:32:32.096 real 0m15.265s 00:32:32.096 user 0m28.205s 00:32:32.096 sys 0m4.299s 00:32:32.096 12:27:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.096 12:27:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:32.096 ************************************ 00:32:32.096 END TEST nvmf_digest_error 00:32:32.096 ************************************ 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:32.355 rmmod nvme_tcp 00:32:32.355 rmmod nvme_fabrics 00:32:32.355 rmmod nvme_keyring 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1138482 ']' 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1138482 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1138482 ']' 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1138482 00:32:32.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1138482) - No such process 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1138482 is not found' 00:32:32.355 Process with pid 1138482 is not found 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.355 12:27:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.256 12:27:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:34.256 00:32:34.256 real 0m34.722s 00:32:34.256 user 0m59.266s 00:32:34.256 sys 0m9.867s 
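The pass/fail decision for nvmf_digest_error above reduces to a single RPC round trip: bdevperf tracks per-bdev NVMe completion statistics, and host/digest.sh only checks that the COMMAND TRANSIENT TRANSPORT ERROR counter is non-zero after the corrupted-digest writes (186 in this run). A minimal standalone sketch of that query, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Count transient transport errors recorded against nvme0n1, mirroring
    # get_transient_errcount in host/digest.sh (workspace path as in this job).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # A data-digest mismatch must surface as a transient transport error,
    # not as a hard I/O failure, so any non-zero count passes the check.
    (( errs > 0 ))
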
00:32:34.256 12:27:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:34.256 12:27:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:34.256 ************************************ 00:32:34.256 END TEST nvmf_digest 00:32:34.256 ************************************ 00:32:34.256 12:27:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:34.256 12:27:42 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:32:34.256 12:27:42 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:32:34.256 12:27:42 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:32:34.256 12:27:42 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:34.256 12:27:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:34.256 12:27:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.256 12:27:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.522 ************************************ 00:32:34.522 START TEST nvmf_bdevperf 00:32:34.522 ************************************ 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:34.522 * Looking for test storage... 00:32:34.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:34.522 12:27:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:37.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:37.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:37.054 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.055 
12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:37.055 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:37.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.055 12:27:44 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:37.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:32:37.055 00:32:37.055 --- 10.0.0.2 ping statistics --- 00:32:37.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.055 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:37.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:32:37.055 00:32:37.055 --- 10.0.0.1 ping statistics --- 00:32:37.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.055 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1142197 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1142197 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1142197 ']' 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
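Both pings succeeding confirms the namespace plumbing that nvmf_tcp_init traced out above: the target-side netdev is moved into its own network namespace so initiator and target traffic actually traverses the NICs rather than the local stack, even though both ends share one host. Condensed from the trace, with the device names discovered for this board:

    # Target NIC (cvl_0_0) is isolated in a namespace; initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With that in place, every subsequent target-side command in the log runs under ip netns exec cvl_0_0_ns_spdk, including the nvmf_tgt process being waited on below.
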
00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.055 [2024-07-22 12:27:44.632006] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:37.055 [2024-07-22 12:27:44.632095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.055 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.055 [2024-07-22 12:27:44.670491] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:37.055 [2024-07-22 12:27:44.704095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:37.055 [2024-07-22 12:27:44.795037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.055 [2024-07-22 12:27:44.795093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.055 [2024-07-22 12:27:44.795110] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.055 [2024-07-22 12:27:44.795121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.055 [2024-07-22 12:27:44.795130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.055 [2024-07-22 12:27:44.795216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.055 [2024-07-22 12:27:44.795276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:37.055 [2024-07-22 12:27:44.795278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.055 [2024-07-22 12:27:44.923515] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.055 Malloc0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- 
host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.055 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.313 [2024-07-22 12:27:44.988406] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:37.313 { 00:32:37.313 "params": { 00:32:37.313 "name": "Nvme$subsystem", 00:32:37.313 "trtype": "$TEST_TRANSPORT", 00:32:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.313 "adrfam": "ipv4", 00:32:37.313 "trsvcid": "$NVMF_PORT", 00:32:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.313 "hdgst": ${hdgst:-false}, 00:32:37.313 "ddgst": ${ddgst:-false} 00:32:37.313 }, 00:32:37.313 "method": "bdev_nvme_attach_controller" 00:32:37.313 } 00:32:37.313 EOF 00:32:37.313 )") 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:37.313 12:27:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:37.313 "params": { 00:32:37.313 "name": "Nvme1", 00:32:37.313 "trtype": "tcp", 00:32:37.313 "traddr": "10.0.0.2", 00:32:37.313 "adrfam": "ipv4", 00:32:37.313 "trsvcid": "4420", 00:32:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:37.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:37.313 "hdgst": false, 00:32:37.313 "ddgst": false 00:32:37.313 }, 00:32:37.313 "method": "bdev_nvme_attach_controller" 00:32:37.313 }' 00:32:37.313 [2024-07-22 12:27:45.032935] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 
00:32:37.313 [2024-07-22 12:27:45.033021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142340 ] 00:32:37.313 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.314 [2024-07-22 12:27:45.065631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:37.314 [2024-07-22 12:27:45.093560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.314 [2024-07-22 12:27:45.179092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.879 Running I/O for 1 seconds... 00:32:38.811 00:32:38.811 Latency(us) 00:32:38.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.811 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:38.811 Verification LBA range: start 0x0 length 0x4000 00:32:38.811 Nvme1n1 : 1.01 8617.25 33.66 0.00 0.00 14791.06 2949.12 13592.65 00:32:38.811 =================================================================================================================== 00:32:38.811 Total : 8617.25 33.66 0.00 0.00 14791.06 2949.12 13592.65 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1142481 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:39.069 { 00:32:39.069 "params": { 00:32:39.069 "name": "Nvme$subsystem", 00:32:39.069 "trtype": "$TEST_TRANSPORT", 00:32:39.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.069 "adrfam": "ipv4", 00:32:39.069 "trsvcid": "$NVMF_PORT", 00:32:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.069 "hdgst": ${hdgst:-false}, 00:32:39.069 "ddgst": ${ddgst:-false} 00:32:39.069 }, 00:32:39.069 "method": "bdev_nvme_attach_controller" 00:32:39.069 } 00:32:39.069 EOF 00:32:39.069 )") 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
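The 15-second run being prepared here is configured the same way as the 1-second one: gen_nvmf_target_json (nvmf/common.sh) expands one bdev_nvme_attach_controller stanza per target and streams the result to bdevperf over an anonymous pipe (--json /dev/fd/63), so no config file touches disk. A sketch of the equivalent invocation with a temp file standing in for the pipe; the params stanza matches the generated config in this trace, while the surrounding subsystems/config framing is an assumption about the wrapper the helper emits:

    cat > /tmp/bperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # -q 128 -o 4096 -w verify: 128 outstanding 4 KiB I/Os with read-back verification;
    # -t 15 runs for 15 s, and -f keeps bdevperf running after I/O failures, so the
    # kill -9 of the target that follows exercises error handling instead of
    # aborting the benchmark outright.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f

The flood of ABORTED - SQ DELETION completions that follows is the expected fallout: once nvmf_tgt is killed with the benchmark mid-flight, every queued READ/WRITE is completed with an abort status as the qpairs are torn down.
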
00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:39.069 12:27:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:39.069 "params": { 00:32:39.069 "name": "Nvme1", 00:32:39.069 "trtype": "tcp", 00:32:39.069 "traddr": "10.0.0.2", 00:32:39.069 "adrfam": "ipv4", 00:32:39.069 "trsvcid": "4420", 00:32:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.069 "hdgst": false, 00:32:39.069 "ddgst": false 00:32:39.069 }, 00:32:39.069 "method": "bdev_nvme_attach_controller" 00:32:39.069 }' 00:32:39.069 [2024-07-22 12:27:46.804442] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:39.069 [2024-07-22 12:27:46.804536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142481 ] 00:32:39.069 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.070 [2024-07-22 12:27:46.837074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:39.070 [2024-07-22 12:27:46.865291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.070 [2024-07-22 12:27:46.949669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.327 Running I/O for 15 seconds... 00:32:41.853 12:27:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1142197 00:32:41.853 12:27:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:41.853 [2024-07-22 12:27:49.771128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44344 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.771982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.771999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:41.853 [2024-07-22 12:27:49.772118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.853 [2024-07-22 12:27:49.772421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.853 [2024-07-22 12:27:49.772443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.772979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.772996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.773012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.773044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.854 [2024-07-22 12:27:49.773076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 
12:27:49.773827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.773979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.773996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.854 [2024-07-22 12:27:49.774546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.854 [2024-07-22 12:27:49.774567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45080 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.774977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.774994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 
12:27:49.775210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.855 [2024-07-22 12:27:49.775540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a14e50 is same with the state(5) to be set 00:32:41.855 [2024-07-22 12:27:49.775576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:41.855 [2024-07-22 12:27:49.775589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:41.855 [2024-07-22 12:27:49.775610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44656 len:8 PRP1 0x0 PRP2 0x0 00:32:41.855 [2024-07-22 12:27:49.775632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.855 [2024-07-22 12:27:49.775709] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a14e50 was disconnected and freed. reset controller. 00:32:41.855 [2024-07-22 12:27:49.779566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.855 [2024-07-22 12:27:49.779664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:41.855 [2024-07-22 12:27:49.780427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.855 [2024-07-22 12:27:49.780478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:41.855 [2024-07-22 12:27:49.780501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:41.855 [2024-07-22 12:27:49.780767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.781013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.781037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.114 [2024-07-22 12:27:49.781057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.114 [2024-07-22 12:27:49.784622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.114 [2024-07-22 12:27:49.793874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.114 [2024-07-22 12:27:49.794360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.114 [2024-07-22 12:27:49.794401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.114 [2024-07-22 12:27:49.794419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.114 [2024-07-22 12:27:49.794669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.794918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.794941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.114 [2024-07-22 12:27:49.794957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.114 [2024-07-22 12:27:49.798517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.114 [2024-07-22 12:27:49.807759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.114 [2024-07-22 12:27:49.808188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.114 [2024-07-22 12:27:49.808216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.114 [2024-07-22 12:27:49.808231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.114 [2024-07-22 12:27:49.808468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.808721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.808745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.114 [2024-07-22 12:27:49.808761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.114 [2024-07-22 12:27:49.812311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.114 [2024-07-22 12:27:49.821750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.114 [2024-07-22 12:27:49.822187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.114 [2024-07-22 12:27:49.822219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.114 [2024-07-22 12:27:49.822237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.114 [2024-07-22 12:27:49.822474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.822728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.822752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.114 [2024-07-22 12:27:49.822768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.114 [2024-07-22 12:27:49.826317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.114 [2024-07-22 12:27:49.835754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.114 [2024-07-22 12:27:49.836175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.114 [2024-07-22 12:27:49.836206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.114 [2024-07-22 12:27:49.836224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.114 [2024-07-22 12:27:49.836462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.836715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.836740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.114 [2024-07-22 12:27:49.836756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.114 [2024-07-22 12:27:49.840306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.114 [2024-07-22 12:27:49.849743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.114 [2024-07-22 12:27:49.850165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.114 [2024-07-22 12:27:49.850196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.114 [2024-07-22 12:27:49.850214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.114 [2024-07-22 12:27:49.850452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.850705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.850729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.114 [2024-07-22 12:27:49.850746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.114 [2024-07-22 12:27:49.854296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.114 [2024-07-22 12:27:49.863736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.114 [2024-07-22 12:27:49.864166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.114 [2024-07-22 12:27:49.864207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.114 [2024-07-22 12:27:49.864225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.114 [2024-07-22 12:27:49.864470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.864722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.864746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.114 [2024-07-22 12:27:49.864762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.114 [2024-07-22 12:27:49.868313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.114 [2024-07-22 12:27:49.877749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.114 [2024-07-22 12:27:49.878174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.114 [2024-07-22 12:27:49.878205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.114 [2024-07-22 12:27:49.878224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.114 [2024-07-22 12:27:49.878461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.114 [2024-07-22 12:27:49.878715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.114 [2024-07-22 12:27:49.878739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.878755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.882303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.115 [2024-07-22 12:27:49.891740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.892170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.892201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.892224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.892463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.115 [2024-07-22 12:27:49.892715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.115 [2024-07-22 12:27:49.892739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.892755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.896303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.115 [2024-07-22 12:27:49.905737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.906167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.906194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.906210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.906452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.115 [2024-07-22 12:27:49.906704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.115 [2024-07-22 12:27:49.906729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.906745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.910296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.115 [2024-07-22 12:27:49.919736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.920137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.920168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.920187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.920424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.115 [2024-07-22 12:27:49.920675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.115 [2024-07-22 12:27:49.920700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.920716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.924268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.115 [2024-07-22 12:27:49.933700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.934115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.934146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.934167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.934404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.115 [2024-07-22 12:27:49.934656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.115 [2024-07-22 12:27:49.934686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.934702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.938252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.115 [2024-07-22 12:27:49.947687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.948107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.948139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.948157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.948394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.115 [2024-07-22 12:27:49.948646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.115 [2024-07-22 12:27:49.948670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.948686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.952236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.115 [2024-07-22 12:27:49.961671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.962076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.962107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.962135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.962393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.115 [2024-07-22 12:27:49.962645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.115 [2024-07-22 12:27:49.962670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.962686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.966236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.115 [2024-07-22 12:27:49.975670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.976113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.976145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.976163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.976400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.115 [2024-07-22 12:27:49.976653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.115 [2024-07-22 12:27:49.976677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.115 [2024-07-22 12:27:49.976694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.115 [2024-07-22 12:27:49.980247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.115 [2024-07-22 12:27:49.989699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.115 [2024-07-22 12:27:49.990143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.115 [2024-07-22 12:27:49.990175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.115 [2024-07-22 12:27:49.990193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.115 [2024-07-22 12:27:49.990431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.116 [2024-07-22 12:27:49.990681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.116 [2024-07-22 12:27:49.990706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.116 [2024-07-22 12:27:49.990721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.116 [2024-07-22 12:27:49.994274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.116 [2024-07-22 12:27:50.004293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.116 [2024-07-22 12:27:50.004815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.116 [2024-07-22 12:27:50.004872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.116 [2024-07-22 12:27:50.004910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.116 [2024-07-22 12:27:50.005261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.116 [2024-07-22 12:27:50.005597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.116 [2024-07-22 12:27:50.005640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.116 [2024-07-22 12:27:50.005660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.116 [2024-07-22 12:27:50.009265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.116 [2024-07-22 12:27:50.018367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.116 [2024-07-22 12:27:50.018774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.116 [2024-07-22 12:27:50.018808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.116 [2024-07-22 12:27:50.018827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.116 [2024-07-22 12:27:50.019066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.116 [2024-07-22 12:27:50.019309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.116 [2024-07-22 12:27:50.019334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.116 [2024-07-22 12:27:50.019350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.116 [2024-07-22 12:27:50.022942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.116 [2024-07-22 12:27:50.032395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.116 [2024-07-22 12:27:50.032811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.116 [2024-07-22 12:27:50.032844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.116 [2024-07-22 12:27:50.032864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.116 [2024-07-22 12:27:50.033111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.116 [2024-07-22 12:27:50.033354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.116 [2024-07-22 12:27:50.033379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.116 [2024-07-22 12:27:50.033395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.116 [2024-07-22 12:27:50.036964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.375 [2024-07-22 12:27:50.046408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.046844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.046877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.046895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.047133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.047375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.047399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.047416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.050990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.375 [2024-07-22 12:27:50.060429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.060862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.060894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.060912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.061150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.061391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.061415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.061430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.064996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.375 [2024-07-22 12:27:50.074437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.074841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.074873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.074891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.075129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.075371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.075395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.075417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.078983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.375 [2024-07-22 12:27:50.088421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.088852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.088883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.088901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.089138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.089380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.089404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.089420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.092986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.375 [2024-07-22 12:27:50.102421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.102852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.102884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.102902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.103140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.103381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.103405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.103421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.106992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.375 [2024-07-22 12:27:50.116440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.116874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.116907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.116926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.117165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.117407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.117432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.117448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.121016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.375 [2024-07-22 12:27:50.130450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.130884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.130921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.130939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.131177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.131418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.131443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.131459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.135026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.375 [2024-07-22 12:27:50.144464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.144894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.144926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.144944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.145182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.145423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.145447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.145464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.149030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.375 [2024-07-22 12:27:50.158465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.158880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.158912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.158930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.159168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.159409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.375 [2024-07-22 12:27:50.159433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.375 [2024-07-22 12:27:50.159449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.375 [2024-07-22 12:27:50.163017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.375 [2024-07-22 12:27:50.172451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.375 [2024-07-22 12:27:50.172855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.375 [2024-07-22 12:27:50.172887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.375 [2024-07-22 12:27:50.172906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.375 [2024-07-22 12:27:50.173144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.375 [2024-07-22 12:27:50.173390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.173416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.173432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.177004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.376 [2024-07-22 12:27:50.186443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.186869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.186902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.186921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.187160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.187400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.187425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.187441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.191009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.376 [2024-07-22 12:27:50.200447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.200873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.200906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.200925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.201162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.201403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.201428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.201443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.205012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.376 [2024-07-22 12:27:50.214445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.214849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.214881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.214900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.215138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.215380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.215405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.215421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.219007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.376 [2024-07-22 12:27:50.228454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.228892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.228924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.228942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.229179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.229421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.229445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.229460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.233025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.376 [2024-07-22 12:27:50.242468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.242897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.242927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.242946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.243183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.243424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.243449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.243464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.247029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.376 [2024-07-22 12:27:50.256485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.256866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.256899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.256918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.257157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.257399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.257422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.257438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.261006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.376 [2024-07-22 12:27:50.270463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.270848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.270888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.270913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.271152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.271395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.271420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.271436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.275006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.376 [2024-07-22 12:27:50.284495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.284908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.284941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.284959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.285198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.285438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.285463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.285480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.289044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.376 [2024-07-22 12:27:50.298489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.376 [2024-07-22 12:27:50.298958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.376 [2024-07-22 12:27:50.299012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.376 [2024-07-22 12:27:50.299031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.376 [2024-07-22 12:27:50.299269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.376 [2024-07-22 12:27:50.299512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.376 [2024-07-22 12:27:50.299536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.376 [2024-07-22 12:27:50.299552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.376 [2024-07-22 12:27:50.303123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.635 [2024-07-22 12:27:50.312367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.635 [2024-07-22 12:27:50.312784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.635 [2024-07-22 12:27:50.312816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.635 [2024-07-22 12:27:50.312834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.635 [2024-07-22 12:27:50.313072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.635 [2024-07-22 12:27:50.313314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.635 [2024-07-22 12:27:50.313346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.635 [2024-07-22 12:27:50.313363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.635 [2024-07-22 12:27:50.316936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.635 [2024-07-22 12:27:50.326377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.635 [2024-07-22 12:27:50.326807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.635 [2024-07-22 12:27:50.326839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.635 [2024-07-22 12:27:50.326857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.635 [2024-07-22 12:27:50.327096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.635 [2024-07-22 12:27:50.327337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.635 [2024-07-22 12:27:50.327361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.635 [2024-07-22 12:27:50.327377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.635 [2024-07-22 12:27:50.330942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.635 [2024-07-22 12:27:50.340392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.635 [2024-07-22 12:27:50.340833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.635 [2024-07-22 12:27:50.340864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.635 [2024-07-22 12:27:50.340883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.635 [2024-07-22 12:27:50.341121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.635 [2024-07-22 12:27:50.341364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.635 [2024-07-22 12:27:50.341388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.635 [2024-07-22 12:27:50.341405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.635 [2024-07-22 12:27:50.344980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.635 [2024-07-22 12:27:50.354230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.635 [2024-07-22 12:27:50.354664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.635 [2024-07-22 12:27:50.354696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.635 [2024-07-22 12:27:50.354715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.635 [2024-07-22 12:27:50.354953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.635 [2024-07-22 12:27:50.355195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.635 [2024-07-22 12:27:50.355218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.635 [2024-07-22 12:27:50.355234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.635 [2024-07-22 12:27:50.358813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.635 [2024-07-22 12:27:50.368069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.635 [2024-07-22 12:27:50.368491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.635 [2024-07-22 12:27:50.368523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.635 [2024-07-22 12:27:50.368543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.635 [2024-07-22 12:27:50.368793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.635 [2024-07-22 12:27:50.369035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.635 [2024-07-22 12:27:50.369059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.635 [2024-07-22 12:27:50.369075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.635 [2024-07-22 12:27:50.372644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.635 [2024-07-22 12:27:50.382116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.635 [2024-07-22 12:27:50.382540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.635 [2024-07-22 12:27:50.382571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.635 [2024-07-22 12:27:50.382589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.635 [2024-07-22 12:27:50.382835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.635 [2024-07-22 12:27:50.383077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.635 [2024-07-22 12:27:50.383100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.635 [2024-07-22 12:27:50.383116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.635 [2024-07-22 12:27:50.386681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.635 [2024-07-22 12:27:50.396132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.635 [2024-07-22 12:27:50.396525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.635 [2024-07-22 12:27:50.396556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.635 [2024-07-22 12:27:50.396574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.635 [2024-07-22 12:27:50.396822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.635 [2024-07-22 12:27:50.397065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.635 [2024-07-22 12:27:50.397089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.635 [2024-07-22 12:27:50.397104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.400679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.636 [2024-07-22 12:27:50.410157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.410576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.410609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.410642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.410882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.411125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.411149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.411165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.414738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.636 [2024-07-22 12:27:50.424020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.424443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.424476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.424495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.424746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.424989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.425013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.425029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.428595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.636 [2024-07-22 12:27:50.437857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.438234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.438268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.438287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.438527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.438787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.438811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.438827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.442390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.636 [2024-07-22 12:27:50.451872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.452245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.452277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.452295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.452532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.452783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.452814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.452831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.456393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.636 [2024-07-22 12:27:50.465872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.466270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.466302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.466321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.466559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.466810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.466837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.466852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.470415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.636 [2024-07-22 12:27:50.479890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.480282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.480315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.480334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.480572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.480822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.480847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.480862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.484417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.636 [2024-07-22 12:27:50.493859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.494252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.494285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.494303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.494542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.494793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.494818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.494833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.498387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.636 [2024-07-22 12:27:50.507844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.508285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.508318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.508337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.508575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.508828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.508853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.508878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.512439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.636 [2024-07-22 12:27:50.521700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.522129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.522180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.522199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.522436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.522690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.522716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.522733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.526288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.636 [2024-07-22 12:27:50.535662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.536085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.536135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.536154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.536392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.536645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.536670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.536686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.540248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.636 [2024-07-22 12:27:50.549483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.636 [2024-07-22 12:27:50.549948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.636 [2024-07-22 12:27:50.549999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.636 [2024-07-22 12:27:50.550017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.636 [2024-07-22 12:27:50.550261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.636 [2024-07-22 12:27:50.550503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.636 [2024-07-22 12:27:50.550527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.636 [2024-07-22 12:27:50.550543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.636 [2024-07-22 12:27:50.554111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.637 [2024-07-22 12:27:50.563341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.637 [2024-07-22 12:27:50.563770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.637 [2024-07-22 12:27:50.563803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.637 [2024-07-22 12:27:50.563822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.637 [2024-07-22 12:27:50.564060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.637 [2024-07-22 12:27:50.564302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.637 [2024-07-22 12:27:50.564326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.637 [2024-07-22 12:27:50.564342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.567910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.896 [2024-07-22 12:27:50.577353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.577782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.577814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.577832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.578071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.578311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.896 [2024-07-22 12:27:50.578336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.896 [2024-07-22 12:27:50.578352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.581920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.896 [2024-07-22 12:27:50.591355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.591820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.591870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.591888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.592126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.592367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.896 [2024-07-22 12:27:50.592391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.896 [2024-07-22 12:27:50.592413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.595983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.896 [2024-07-22 12:27:50.605222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.605696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.605749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.605785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.606023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.606263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.896 [2024-07-22 12:27:50.606287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.896 [2024-07-22 12:27:50.606302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.609872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.896 [2024-07-22 12:27:50.619103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.619549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.619598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.619628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.619870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.620110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.896 [2024-07-22 12:27:50.620135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.896 [2024-07-22 12:27:50.620151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.623713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.896 [2024-07-22 12:27:50.632943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.633362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.633394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.633413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.633664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.633906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.896 [2024-07-22 12:27:50.633931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.896 [2024-07-22 12:27:50.633947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.637503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.896 [2024-07-22 12:27:50.646948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.647369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.647406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.647425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.647677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.647919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.896 [2024-07-22 12:27:50.647944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.896 [2024-07-22 12:27:50.647960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.651513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.896 [2024-07-22 12:27:50.660956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.661350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.661382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.661401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.661651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.661892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.896 [2024-07-22 12:27:50.661917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.896 [2024-07-22 12:27:50.661932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.896 [2024-07-22 12:27:50.665487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.896 [2024-07-22 12:27:50.674935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.896 [2024-07-22 12:27:50.675374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.896 [2024-07-22 12:27:50.675405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.896 [2024-07-22 12:27:50.675424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.896 [2024-07-22 12:27:50.675676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.896 [2024-07-22 12:27:50.675917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.897 [2024-07-22 12:27:50.675941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.897 [2024-07-22 12:27:50.675957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.897 [2024-07-22 12:27:50.679512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.897 [2024-07-22 12:27:50.688752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.897 [2024-07-22 12:27:50.689147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.897 [2024-07-22 12:27:50.689179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.897 [2024-07-22 12:27:50.689197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.897 [2024-07-22 12:27:50.689434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.897 [2024-07-22 12:27:50.689696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.897 [2024-07-22 12:27:50.689721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.897 [2024-07-22 12:27:50.689737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.897 [2024-07-22 12:27:50.693292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.897 [2024-07-22 12:27:50.702738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.897 [2024-07-22 12:27:50.703157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.897 [2024-07-22 12:27:50.703188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.897 [2024-07-22 12:27:50.703206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.897 [2024-07-22 12:27:50.703443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.897 [2024-07-22 12:27:50.703699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.897 [2024-07-22 12:27:50.703724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.897 [2024-07-22 12:27:50.703741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.897 [2024-07-22 12:27:50.707296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.897 [2024-07-22 12:27:50.716749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.897 [2024-07-22 12:27:50.717142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.897 [2024-07-22 12:27:50.717174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.897 [2024-07-22 12:27:50.717192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.897 [2024-07-22 12:27:50.717429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.897 [2024-07-22 12:27:50.717683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.897 [2024-07-22 12:27:50.717708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.897 [2024-07-22 12:27:50.717724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.897 [2024-07-22 12:27:50.721279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.897 [2024-07-22 12:27:50.730756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.897 [2024-07-22 12:27:50.731176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.897 [2024-07-22 12:27:50.731208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.897 [2024-07-22 12:27:50.731226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.897 [2024-07-22 12:27:50.731463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.897 [2024-07-22 12:27:50.731718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.897 [2024-07-22 12:27:50.731744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.897 [2024-07-22 12:27:50.731760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.897 [2024-07-22 12:27:50.735320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.897 [2024-07-22 12:27:50.744764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.897 [2024-07-22 12:27:50.745181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.897 [2024-07-22 12:27:50.745213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.897 [2024-07-22 12:27:50.745231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.897 [2024-07-22 12:27:50.745468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.897 [2024-07-22 12:27:50.745723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.897 [2024-07-22 12:27:50.745748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.897 [2024-07-22 12:27:50.745764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.897 [2024-07-22 12:27:50.749319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.897 [2024-07-22 12:27:50.758761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.897 [2024-07-22 12:27:50.759177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.897 [2024-07-22 12:27:50.759209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:42.897 [2024-07-22 12:27:50.759227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:42.897 [2024-07-22 12:27:50.759464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:42.897 [2024-07-22 12:27:50.759719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.897 [2024-07-22 12:27:50.759745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.897 [2024-07-22 12:27:50.759761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.897 [2024-07-22 12:27:50.763315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.897 [2024-07-22 12:27:50.772756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.897 [2024-07-22 12:27:50.773219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.897 [2024-07-22 12:27:50.773250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:42.897 [2024-07-22 12:27:50.773269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:42.897 [2024-07-22 12:27:50.773507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:42.897 [2024-07-22 12:27:50.773760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.897 [2024-07-22 12:27:50.773784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.897 [2024-07-22 12:27:50.773800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.897 [2024-07-22 12:27:50.777354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.897 [2024-07-22 12:27:50.786598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.897 [2024-07-22 12:27:50.787023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.897 [2024-07-22 12:27:50.787055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:42.897 [2024-07-22 12:27:50.787080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:42.897 [2024-07-22 12:27:50.787319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:42.897 [2024-07-22 12:27:50.787562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.897 [2024-07-22 12:27:50.787586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.897 [2024-07-22 12:27:50.787602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.897 [2024-07-22 12:27:50.791186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.897 [2024-07-22 12:27:50.800456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.897 [2024-07-22 12:27:50.800874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.897 [2024-07-22 12:27:50.800908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:42.897 [2024-07-22 12:27:50.800927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:42.897 [2024-07-22 12:27:50.801166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:42.897 [2024-07-22 12:27:50.801408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.897 [2024-07-22 12:27:50.801433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.897 [2024-07-22 12:27:50.801449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.897 [2024-07-22 12:27:50.805017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.897 [2024-07-22 12:27:50.814455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.897 [2024-07-22 12:27:50.814875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.897 [2024-07-22 12:27:50.814909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:42.897 [2024-07-22 12:27:50.814928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:42.897 [2024-07-22 12:27:50.815167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:42.897 [2024-07-22 12:27:50.815410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.897 [2024-07-22 12:27:50.815436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.897 [2024-07-22 12:27:50.815452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.897 [2024-07-22 12:27:50.819026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.828462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.828889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.828921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.828939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.829177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.829418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.829449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.829466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.833034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.842466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.842891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.842922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.842940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.843178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.843418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.843443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.843458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.847024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.856458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.856913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.856964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.856983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.857221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.857461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.857486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.857501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.861078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.870310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.870730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.870762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.870780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.871018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.871260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.871284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.871300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.874865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.884303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.884698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.884730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.884749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.884987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.885228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.885252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.885268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.888835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.898273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.898694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.898726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.898744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.898981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.899222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.899246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.899263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.902829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.912265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.912660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.912693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.912712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.912950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.913191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.913216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.913232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.916804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.926243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.926661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.926694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.926712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.926957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.927198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.927222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.927239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.930806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.940243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.940641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.940675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.940693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.940932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.941173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.157 [2024-07-22 12:27:50.941197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.157 [2024-07-22 12:27:50.941213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.157 [2024-07-22 12:27:50.944780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.157 [2024-07-22 12:27:50.954219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.157 [2024-07-22 12:27:50.954658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.157 [2024-07-22 12:27:50.954690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.157 [2024-07-22 12:27:50.954709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.157 [2024-07-22 12:27:50.954946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.157 [2024-07-22 12:27:50.955187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:50.955212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:50.955228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:50.958793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:50.968233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:50.968663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:50.968695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:50.968713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:50.968951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:50.969192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:50.969216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:50.969238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:50.972806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:50.982246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:50.982665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:50.982697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:50.982715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:50.982954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:50.983194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:50.983218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:50.983233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:50.986799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:50.996234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:50.996649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:50.996680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:50.996698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:50.996936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:50.997178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:50.997201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:50.997217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:51.000780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:51.010215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:51.010637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:51.010669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:51.010687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:51.010924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:51.011167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:51.011190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:51.011205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:51.014764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:51.024218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:51.024636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:51.024690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:51.024709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:51.024947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:51.025188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:51.025212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:51.025227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:51.028787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:51.038236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:51.038635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:51.038675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:51.038694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:51.038932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:51.039174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:51.039198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:51.039213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:51.042777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:51.052224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:51.052662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:51.052695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:51.052713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:51.052951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:51.053193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:51.053217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:51.053233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:51.056795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:51.066224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:51.066676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:51.066709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:51.066727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:51.066970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:51.067211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:51.067236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:51.067252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:51.070815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.158 [2024-07-22 12:27:51.080241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.158 [2024-07-22 12:27:51.080661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.158 [2024-07-22 12:27:51.080694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.158 [2024-07-22 12:27:51.080712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.158 [2024-07-22 12:27:51.080951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.158 [2024-07-22 12:27:51.081193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.158 [2024-07-22 12:27:51.081217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.158 [2024-07-22 12:27:51.081234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.158 [2024-07-22 12:27:51.084797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.418 [2024-07-22 12:27:51.094228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.418 [2024-07-22 12:27:51.094594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.418 [2024-07-22 12:27:51.094634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.418 [2024-07-22 12:27:51.094654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.418 [2024-07-22 12:27:51.094892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.418 [2024-07-22 12:27:51.095134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.418 [2024-07-22 12:27:51.095159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.418 [2024-07-22 12:27:51.095175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.418 [2024-07-22 12:27:51.098735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.418 [2024-07-22 12:27:51.108165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.418 [2024-07-22 12:27:51.108582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.418 [2024-07-22 12:27:51.108622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.418 [2024-07-22 12:27:51.108643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.418 [2024-07-22 12:27:51.108881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.418 [2024-07-22 12:27:51.109122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.418 [2024-07-22 12:27:51.109146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.418 [2024-07-22 12:27:51.109167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.418 [2024-07-22 12:27:51.112727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.418 [2024-07-22 12:27:51.122157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.418 [2024-07-22 12:27:51.122588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.418 [2024-07-22 12:27:51.122626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.418 [2024-07-22 12:27:51.122647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.418 [2024-07-22 12:27:51.122884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.418 [2024-07-22 12:27:51.123127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.418 [2024-07-22 12:27:51.123150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.418 [2024-07-22 12:27:51.123165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.418 [2024-07-22 12:27:51.126725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.418 [2024-07-22 12:27:51.136152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.418 [2024-07-22 12:27:51.136547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.418 [2024-07-22 12:27:51.136579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.418 [2024-07-22 12:27:51.136597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.418 [2024-07-22 12:27:51.136843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.418 [2024-07-22 12:27:51.137085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.418 [2024-07-22 12:27:51.137109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.418 [2024-07-22 12:27:51.137125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.418 [2024-07-22 12:27:51.140682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.418 [2024-07-22 12:27:51.150116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.418 [2024-07-22 12:27:51.150537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.418 [2024-07-22 12:27:51.150569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.418 [2024-07-22 12:27:51.150587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.418 [2024-07-22 12:27:51.150836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.418 [2024-07-22 12:27:51.151078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.418 [2024-07-22 12:27:51.151103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.418 [2024-07-22 12:27:51.151119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.418 [2024-07-22 12:27:51.154677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.418 [2024-07-22 12:27:51.164099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.418 [2024-07-22 12:27:51.164499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.418 [2024-07-22 12:27:51.164536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.418 [2024-07-22 12:27:51.164555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.418 [2024-07-22 12:27:51.164806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.418 [2024-07-22 12:27:51.165047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.418 [2024-07-22 12:27:51.165071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.418 [2024-07-22 12:27:51.165088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.418 [2024-07-22 12:27:51.168647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.418 [2024-07-22 12:27:51.178070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.418 [2024-07-22 12:27:51.178491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.418 [2024-07-22 12:27:51.178523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.178541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.178791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.179033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.179057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.179073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.182631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.192056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.192480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.192512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.192530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.192778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.193019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.193044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.193060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.196617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.206043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.206439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.206471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.206490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.206740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.206988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.207013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.207030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.210579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.220014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.220405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.220437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.220455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.220705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.220946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.220970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.220986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.224538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.233989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.234416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.234448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.234467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.234716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.234957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.234981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.234997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.238545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.247979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.248399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.248431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.248450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.248699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.248941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.248966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.248982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.252541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.261978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.262394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.262426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.262445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.262693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.262936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.262959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.262975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.266530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.275971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.276391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.276423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.276441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.276688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.276928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.276952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.276968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.280516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.289968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.290388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.290420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.290439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.290687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.290928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.290953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.290969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.294524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.303966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.304385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.304417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.304440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.304688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.304929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.304953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.304968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.308518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.317967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.318389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.318421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.419 [2024-07-22 12:27:51.318439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.419 [2024-07-22 12:27:51.318686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.419 [2024-07-22 12:27:51.318927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.419 [2024-07-22 12:27:51.318952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.419 [2024-07-22 12:27:51.318968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.419 [2024-07-22 12:27:51.322514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.419 [2024-07-22 12:27:51.331954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.419 [2024-07-22 12:27:51.332391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.419 [2024-07-22 12:27:51.332423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.420 [2024-07-22 12:27:51.332442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.420 [2024-07-22 12:27:51.332691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.420 [2024-07-22 12:27:51.332934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.420 [2024-07-22 12:27:51.332959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.420 [2024-07-22 12:27:51.332975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.420 [2024-07-22 12:27:51.336531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.420 [2024-07-22 12:27:51.345967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.420 [2024-07-22 12:27:51.346360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.420 [2024-07-22 12:27:51.346392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.420 [2024-07-22 12:27:51.346412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.420 [2024-07-22 12:27:51.346661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.420 [2024-07-22 12:27:51.346902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.420 [2024-07-22 12:27:51.346932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.420 [2024-07-22 12:27:51.346948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.679 [2024-07-22 12:27:51.350501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.679 [2024-07-22 12:27:51.359939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.679 [2024-07-22 12:27:51.360356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.679 [2024-07-22 12:27:51.360389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.679 [2024-07-22 12:27:51.360407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.679 [2024-07-22 12:27:51.360657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.679 [2024-07-22 12:27:51.360908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.679 [2024-07-22 12:27:51.360933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.679 [2024-07-22 12:27:51.360949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.680 [2024-07-22 12:27:51.364501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.680 [2024-07-22 12:27:51.373765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.680 [2024-07-22 12:27:51.374193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.680 [2024-07-22 12:27:51.374224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:43.680 [2024-07-22 12:27:51.374242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:43.680 [2024-07-22 12:27:51.374480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:43.680 [2024-07-22 12:27:51.374730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.680 [2024-07-22 12:27:51.374756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.680 [2024-07-22 12:27:51.374772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.680 [2024-07-22 12:27:51.378319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.680 [2024-07-22 12:27:51.387760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.388178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.388209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.388227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.388464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.388716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.388740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.388757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.392307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.680 [2024-07-22 12:27:51.401755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.402182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.402213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.402231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.402468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.402719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.402743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.402760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.406311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.680 [2024-07-22 12:27:51.415756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.416172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.416204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.416222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.416459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.416709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.416734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.416750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.420300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.680 [2024-07-22 12:27:51.429754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.430170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.430201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.430219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.430457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.430734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.430771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.430788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.434343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.680 [2024-07-22 12:27:51.443575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.443969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.444001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.444020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.444264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.444505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.444530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.444546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.448106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.680 [2024-07-22 12:27:51.457542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.457967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.457999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.458017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.458255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.458496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.458520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.458536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.462098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.680 [2024-07-22 12:27:51.471523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.471946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.471977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.471996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.472234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.472475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.472499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.472515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.476072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.680 [2024-07-22 12:27:51.485504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.485936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.485968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.485986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.486223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.486465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.486488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.486510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.490094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.680 [2024-07-22 12:27:51.499525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.499934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.499965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.499984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.500221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.500463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.500487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.680 [2024-07-22 12:27:51.500503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.680 [2024-07-22 12:27:51.504062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.680 [2024-07-22 12:27:51.513515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.680 [2024-07-22 12:27:51.513902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.680 [2024-07-22 12:27:51.513934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.680 [2024-07-22 12:27:51.513952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.680 [2024-07-22 12:27:51.514190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.680 [2024-07-22 12:27:51.514436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.680 [2024-07-22 12:27:51.514460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.514475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.681 [2024-07-22 12:27:51.517847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.681 [2024-07-22 12:27:51.526986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.681 [2024-07-22 12:27:51.527397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.681 [2024-07-22 12:27:51.527425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.681 [2024-07-22 12:27:51.527441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.681 [2024-07-22 12:27:51.527720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.681 [2024-07-22 12:27:51.527946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.681 [2024-07-22 12:27:51.527966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.527979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.681 [2024-07-22 12:27:51.531100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.681 [2024-07-22 12:27:51.540352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.681 [2024-07-22 12:27:51.540721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.681 [2024-07-22 12:27:51.540749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.681 [2024-07-22 12:27:51.540766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.681 [2024-07-22 12:27:51.540993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.681 [2024-07-22 12:27:51.541206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.681 [2024-07-22 12:27:51.541225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.541237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.681 [2024-07-22 12:27:51.544341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.681 [2024-07-22 12:27:51.553817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.681 [2024-07-22 12:27:51.554222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.681 [2024-07-22 12:27:51.554250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.681 [2024-07-22 12:27:51.554266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.681 [2024-07-22 12:27:51.554519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.681 [2024-07-22 12:27:51.554750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.681 [2024-07-22 12:27:51.554771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.554785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.681 [2024-07-22 12:27:51.557832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.681 [2024-07-22 12:27:51.567159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.681 [2024-07-22 12:27:51.567512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.681 [2024-07-22 12:27:51.567539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.681 [2024-07-22 12:27:51.567554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.681 [2024-07-22 12:27:51.567818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.681 [2024-07-22 12:27:51.568053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.681 [2024-07-22 12:27:51.568072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.568084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.681 [2024-07-22 12:27:51.571179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.681 [2024-07-22 12:27:51.580564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.681 [2024-07-22 12:27:51.581029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.681 [2024-07-22 12:27:51.581071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.681 [2024-07-22 12:27:51.581088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.681 [2024-07-22 12:27:51.581328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.681 [2024-07-22 12:27:51.581525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.681 [2024-07-22 12:27:51.581544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.581557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.681 [2024-07-22 12:27:51.584572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.681 [2024-07-22 12:27:51.593995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.681 [2024-07-22 12:27:51.594400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.681 [2024-07-22 12:27:51.594427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.681 [2024-07-22 12:27:51.594443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.681 [2024-07-22 12:27:51.594708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.681 [2024-07-22 12:27:51.594907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.681 [2024-07-22 12:27:51.594926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.594938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.681 [2024-07-22 12:27:51.597952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.681 [2024-07-22 12:27:51.607564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.681 [2024-07-22 12:27:51.607943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.681 [2024-07-22 12:27:51.607970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.681 [2024-07-22 12:27:51.607985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.681 [2024-07-22 12:27:51.608205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.681 [2024-07-22 12:27:51.608406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.681 [2024-07-22 12:27:51.608425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.681 [2024-07-22 12:27:51.608437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.941 [2024-07-22 12:27:51.611657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.941 [2024-07-22 12:27:51.620890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.941 [2024-07-22 12:27:51.621293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.941 [2024-07-22 12:27:51.621322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.941 [2024-07-22 12:27:51.621337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.941 [2024-07-22 12:27:51.621599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.941 [2024-07-22 12:27:51.621804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.941 [2024-07-22 12:27:51.621824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.941 [2024-07-22 12:27:51.621836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.941 [2024-07-22 12:27:51.624811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.942 [2024-07-22 12:27:51.634124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.634539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.634567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.634599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.634860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.635068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.635087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.635099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.638104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.942 [2024-07-22 12:27:51.647472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.647896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.647923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.647938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.648172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.648369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.648388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.648400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.651374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.942 [2024-07-22 12:27:51.660827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.661244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.661270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.661300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.661541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.661767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.661787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.661800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.664768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.942 [2024-07-22 12:27:51.674000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.674369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.674415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.674430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.674662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.674872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.674907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.674920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.677884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.942 [2024-07-22 12:27:51.687310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.687662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.687690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.687706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.687913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.688125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.688144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.688156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.691151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.942 [2024-07-22 12:27:51.700619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.701019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.701046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.701061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.701282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.701494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.701513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.701526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.704491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.942 [2024-07-22 12:27:51.713898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.714297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.714323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.714353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.714586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.714820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.714841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.714853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.717781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.942 [2024-07-22 12:27:51.727171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.727604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.727651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.727667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.727902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.728099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.728118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.728130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.731104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.942 [2024-07-22 12:27:51.740348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.740814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.740842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.740858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.741112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.741310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.741328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.741340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.744297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.942 [2024-07-22 12:27:51.753516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.753980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.754022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.754038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.754278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.754475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.754494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.754506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.942 [2024-07-22 12:27:51.757535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.942 [2024-07-22 12:27:51.766811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.942 [2024-07-22 12:27:51.767232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.942 [2024-07-22 12:27:51.767259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.942 [2024-07-22 12:27:51.767290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.942 [2024-07-22 12:27:51.767527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.942 [2024-07-22 12:27:51.767751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.942 [2024-07-22 12:27:51.767772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.942 [2024-07-22 12:27:51.767784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.770747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.943 [2024-07-22 12:27:51.780139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.943 [2024-07-22 12:27:51.780539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-07-22 12:27:51.780566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.943 [2024-07-22 12:27:51.780582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.943 [2024-07-22 12:27:51.780831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.943 [2024-07-22 12:27:51.781065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.943 [2024-07-22 12:27:51.781084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.943 [2024-07-22 12:27:51.781096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.784151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.943 [2024-07-22 12:27:51.793673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.943 [2024-07-22 12:27:51.794103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-07-22 12:27:51.794131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.943 [2024-07-22 12:27:51.794147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.943 [2024-07-22 12:27:51.794387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.943 [2024-07-22 12:27:51.794604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.943 [2024-07-22 12:27:51.794632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.943 [2024-07-22 12:27:51.794647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.797789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.943 [2024-07-22 12:27:51.806949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.943 [2024-07-22 12:27:51.807361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-07-22 12:27:51.807386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.943 [2024-07-22 12:27:51.807421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.943 [2024-07-22 12:27:51.807662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.943 [2024-07-22 12:27:51.807860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.943 [2024-07-22 12:27:51.807879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.943 [2024-07-22 12:27:51.807891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.810858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.943 [2024-07-22 12:27:51.820198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.943 [2024-07-22 12:27:51.820600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-07-22 12:27:51.820634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.943 [2024-07-22 12:27:51.820651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.943 [2024-07-22 12:27:51.820892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.943 [2024-07-22 12:27:51.821104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.943 [2024-07-22 12:27:51.821123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.943 [2024-07-22 12:27:51.821135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.824066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.943 [2024-07-22 12:27:51.833476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.943 [2024-07-22 12:27:51.833949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-07-22 12:27:51.833976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.943 [2024-07-22 12:27:51.833991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.943 [2024-07-22 12:27:51.834242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.943 [2024-07-22 12:27:51.834439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.943 [2024-07-22 12:27:51.834457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.943 [2024-07-22 12:27:51.834469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.837429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.943 [2024-07-22 12:27:51.846727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.943 [2024-07-22 12:27:51.847156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-07-22 12:27:51.847198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.943 [2024-07-22 12:27:51.847214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.943 [2024-07-22 12:27:51.847458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.943 [2024-07-22 12:27:51.847678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.943 [2024-07-22 12:27:51.847702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.943 [2024-07-22 12:27:51.847715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.850674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.943 [2024-07-22 12:27:51.859924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.943 [2024-07-22 12:27:51.860302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.943 [2024-07-22 12:27:51.860343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:43.943 [2024-07-22 12:27:51.860359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:43.943 [2024-07-22 12:27:51.860606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:43.943 [2024-07-22 12:27:51.860832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.943 [2024-07-22 12:27:51.860851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.943 [2024-07-22 12:27:51.860863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.943 [2024-07-22 12:27:51.863830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:44.204 [2024-07-22 12:27:51.873359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.204 [2024-07-22 12:27:51.873789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.204 [2024-07-22 12:27:51.873816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.204 [2024-07-22 12:27:51.873831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.204 [2024-07-22 12:27:51.874065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.204 [2024-07-22 12:27:51.874262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.204 [2024-07-22 12:27:51.874280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.204 [2024-07-22 12:27:51.874293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.204 [2024-07-22 12:27:51.877560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.204 [2024-07-22 12:27:51.886563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.204 [2024-07-22 12:27:51.887010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.204 [2024-07-22 12:27:51.887038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.204 [2024-07-22 12:27:51.887069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.204 [2024-07-22 12:27:51.887322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.204 [2024-07-22 12:27:51.887519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.204 [2024-07-22 12:27:51.887537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.204 [2024-07-22 12:27:51.887549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.204 [2024-07-22 12:27:51.890545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:44.204 [2024-07-22 12:27:51.899859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.204 [2024-07-22 12:27:51.900314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.204 [2024-07-22 12:27:51.900357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.204 [2024-07-22 12:27:51.900372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.204 [2024-07-22 12:27:51.900650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.204 [2024-07-22 12:27:51.900854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.204 [2024-07-22 12:27:51.900873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.204 [2024-07-22 12:27:51.900886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.204 [2024-07-22 12:27:51.903849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.204 [2024-07-22 12:27:51.913073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.204 [2024-07-22 12:27:51.913488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.204 [2024-07-22 12:27:51.913529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.204 [2024-07-22 12:27:51.913545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.204 [2024-07-22 12:27:51.913784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.204 [2024-07-22 12:27:51.914020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.204 [2024-07-22 12:27:51.914039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.204 [2024-07-22 12:27:51.914052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.204 [2024-07-22 12:27:51.917021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:44.204 [2024-07-22 12:27:51.926259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.204 [2024-07-22 12:27:51.926691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.204 [2024-07-22 12:27:51.926719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.204 [2024-07-22 12:27:51.926735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.204 [2024-07-22 12:27:51.926975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.204 [2024-07-22 12:27:51.927171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.204 [2024-07-22 12:27:51.927190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.204 [2024-07-22 12:27:51.927203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.204 [2024-07-22 12:27:51.930175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.204 [2024-07-22 12:27:51.939421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.204 [2024-07-22 12:27:51.939826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.204 [2024-07-22 12:27:51.939853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.205 [2024-07-22 12:27:51.939868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.205 [2024-07-22 12:27:51.940105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.205 [2024-07-22 12:27:51.940303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.205 [2024-07-22 12:27:51.940322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.205 [2024-07-22 12:27:51.940334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.205 [2024-07-22 12:27:51.943346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 49 further identical reconnect/reset-failure cycles elided (12:27:51.952644 through 12:27:52.611680); only the timestamps advance, roughly one cycle every 13 ms ...]
00:32:44.729 [2024-07-22 12:27:52.620933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.729 [2024-07-22 12:27:52.621336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.729 [2024-07-22 12:27:52.621367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.729 [2024-07-22 12:27:52.621384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.729 [2024-07-22 12:27:52.621633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.729 [2024-07-22 12:27:52.621877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.729 [2024-07-22 12:27:52.621900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.729 [2024-07-22 12:27:52.621915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.729 [2024-07-22 12:27:52.625476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.729 [2024-07-22 12:27:52.634932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.729 [2024-07-22 12:27:52.635363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.729 [2024-07-22 12:27:52.635394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.729 [2024-07-22 12:27:52.635412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.729 [2024-07-22 12:27:52.635659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.729 [2024-07-22 12:27:52.635900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.729 [2024-07-22 12:27:52.635923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.729 [2024-07-22 12:27:52.635938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.729 [2024-07-22 12:27:52.639492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.729 [2024-07-22 12:27:52.648956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.729 [2024-07-22 12:27:52.649351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.729 [2024-07-22 12:27:52.649382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.729 [2024-07-22 12:27:52.649400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.729 [2024-07-22 12:27:52.649649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.729 [2024-07-22 12:27:52.649891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.729 [2024-07-22 12:27:52.649915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.729 [2024-07-22 12:27:52.649930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.729 [2024-07-22 12:27:52.653486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:44.989 [2024-07-22 12:27:52.662968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.989 [2024-07-22 12:27:52.663341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.989 [2024-07-22 12:27:52.663371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.989 [2024-07-22 12:27:52.663389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.989 [2024-07-22 12:27:52.663636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.989 [2024-07-22 12:27:52.663877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.989 [2024-07-22 12:27:52.663900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.989 [2024-07-22 12:27:52.663915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.989 [2024-07-22 12:27:52.667467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.989 [2024-07-22 12:27:52.676940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.989 [2024-07-22 12:27:52.677334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.989 [2024-07-22 12:27:52.677364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.989 [2024-07-22 12:27:52.677388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.989 [2024-07-22 12:27:52.677639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.989 [2024-07-22 12:27:52.677881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.989 [2024-07-22 12:27:52.677903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.989 [2024-07-22 12:27:52.677918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.989 [2024-07-22 12:27:52.681485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:44.989 [2024-07-22 12:27:52.690940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.989 [2024-07-22 12:27:52.691355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.989 [2024-07-22 12:27:52.691386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.989 [2024-07-22 12:27:52.691403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.989 [2024-07-22 12:27:52.691650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.989 [2024-07-22 12:27:52.691892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.989 [2024-07-22 12:27:52.691914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.989 [2024-07-22 12:27:52.691929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.989 [2024-07-22 12:27:52.695484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.989 [2024-07-22 12:27:52.704948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.989 [2024-07-22 12:27:52.705317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.989 [2024-07-22 12:27:52.705348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.989 [2024-07-22 12:27:52.705365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.989 [2024-07-22 12:27:52.705602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.989 [2024-07-22 12:27:52.705853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.989 [2024-07-22 12:27:52.705876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.989 [2024-07-22 12:27:52.705891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.989 [2024-07-22 12:27:52.709452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:44.989 [2024-07-22 12:27:52.718908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.989 [2024-07-22 12:27:52.719335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.989 [2024-07-22 12:27:52.719366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.989 [2024-07-22 12:27:52.719383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.989 [2024-07-22 12:27:52.719630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.989 [2024-07-22 12:27:52.719872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.989 [2024-07-22 12:27:52.719900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.989 [2024-07-22 12:27:52.719916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.989 [2024-07-22 12:27:52.723469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:44.989 [2024-07-22 12:27:52.732919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.989 [2024-07-22 12:27:52.733349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.990 [2024-07-22 12:27:52.733380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:44.990 [2024-07-22 12:27:52.733398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:44.990 [2024-07-22 12:27:52.733645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:44.990 [2024-07-22 12:27:52.733885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:44.990 [2024-07-22 12:27:52.733908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:44.990 [2024-07-22 12:27:52.733923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:44.990 [2024-07-22 12:27:52.737477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
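Note on the repeating block above: it is one bdev_nvme reconnect cycle, replayed every ~14 ms. Each retry opens a fresh TCP socket toward the target, connect() fails with errno = 111 (ECONNREFUSED on Linux, i.e. nothing is listening at 10.0.0.2:4420), controller re-initialization is aborted, and the reset is declared failed before the next attempt is scheduled. A minimal way to confirm the refused port by hand while the test runs, as a sketch (cvl_0_0_ns_spdk is the namespace this job uses; nc may not be present on every runner):

  # Probe the listener address the host keeps retrying; -z only tests connectivity.
  sudo ip netns exec cvl_0_0_ns_spdk nc -zv 10.0.0.2 4420 \
      || echo 'ECONNREFUSED: no nvmf_tgt listener on port 4420 yet'

The refusals are expected here: the target application is killed just below, and they persist until a new nvmf_tgt comes up and its subsystems and listener are re-created.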
00:32:44.990 [2024-07-22 12:27:52.746922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.747321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.747352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.747370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.747607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.747860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.747882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.747897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 [2024-07-22 12:27:52.751451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.990 [2024-07-22 12:27:52.760906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.761319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.761350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.761368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.761605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.761856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.761879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.761894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1142197 Killed "${NVMF_APP[@]}" "$@"
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
[2024-07-22 12:27:52.765448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1143255
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1143255
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1143255 ']'
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:44.990 12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
12:27:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:44.990 [2024-07-22 12:27:52.774896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.775309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.775340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.775358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.775595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.775844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.775868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.775883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 [2024-07-22 12:27:52.779446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
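Note: nvmfappstart -m 0xE above relaunches the target with a reactor core mask. 0xE is binary 1110, so the app claims cores 1-3 and leaves core 0 alone, which is what later produces "Total cores available: 3" and the three "Reactor started on core 1/2/3" notices. A small sketch for decoding such a mask:

  # Decode an SPDK core mask into core numbers (plain bash; prints cores 1, 2, 3 for 0xE).
  mask=0xE
  for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && echo "core $core"
  done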
00:32:44.990 [2024-07-22 12:27:52.788924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.789341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.789371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.789390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.789638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.789887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.789911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.789925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 [2024-07-22 12:27:52.793478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.990 [2024-07-22 12:27:52.802429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.802820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.802849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.802865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.803116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.803320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.803341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.803354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 [2024-07-22 12:27:52.806462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.990 [2024-07-22 12:27:52.815692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.816104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.816132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.816147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.816383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.816575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.816608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.816636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 [2024-07-22 12:27:52.817367] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:32:44.990 [2024-07-22 12:27:52.817423] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:44.990 [2024-07-22 12:27:52.819567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.990 [2024-07-22 12:27:52.829182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.829530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.829558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.829574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.829837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.830065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.830084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.830097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 [2024-07-22 12:27:52.833189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.990 [2024-07-22 12:27:52.842484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.842976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.843003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.990 [2024-07-22 12:27:52.843019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.990 [2024-07-22 12:27:52.843266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.990 [2024-07-22 12:27:52.843478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.990 [2024-07-22 12:27:52.843498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.990 [2024-07-22 12:27:52.843511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.990 [2024-07-22 12:27:52.846503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.990 EAL: No free 2048 kB hugepages reported on node 1
00:32:44.990 [2024-07-22 12:27:52.855700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.990 [2024-07-22 12:27:52.856096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.990 [2024-07-22 12:27:52.856124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.991 [2024-07-22 12:27:52.856141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.991 [2024-07-22 12:27:52.856387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.991 [2024-07-22 12:27:52.856640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.991 [2024-07-22 12:27:52.856678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.991 [2024-07-22 12:27:52.856692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.991 [2024-07-22 12:27:52.856920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:44.991 [2024-07-22 12:27:52.860205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
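Note: "EAL: No free 2048 kB hugepages reported on node 1" is informational; the hugepage pool backing DPDK was reserved on NUMA node 0 only, so node 1 simply has none to report. These runners pre-allocate hugepages before any SPDK app starts; a rough manual equivalent, as a sketch (scripts/setup.sh ships in the SPDK tree, HUGEMEM is in megabytes):

  sudo HUGEMEM=2048 ./scripts/setup.sh
  # Show what was actually reserved per NUMA node:
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages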
00:32:44.991 [2024-07-22 12:27:52.869622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.991 [2024-07-22 12:27:52.870030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.991 [2024-07-22 12:27:52.870061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.991 [2024-07-22 12:27:52.870079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.991 [2024-07-22 12:27:52.870318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.991 [2024-07-22 12:27:52.870558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.991 [2024-07-22 12:27:52.870581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.991 [2024-07-22 12:27:52.870597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.991 [2024-07-22 12:27:52.874142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.991 [2024-07-22 12:27:52.883415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.991 [2024-07-22 12:27:52.883869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.991 [2024-07-22 12:27:52.883912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.991 [2024-07-22 12:27:52.883933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.991 [2024-07-22 12:27:52.884184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.991 [2024-07-22 12:27:52.884427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.991 [2024-07-22 12:27:52.884450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.991 [2024-07-22 12:27:52.884466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.991 [2024-07-22 12:27:52.886915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:44.991 [2024-07-22 12:27:52.887923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.991 [2024-07-22 12:27:52.897254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.991 [2024-07-22 12:27:52.897896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.991 [2024-07-22 12:27:52.897950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.991 [2024-07-22 12:27:52.897973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.991 [2024-07-22 12:27:52.898223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.991 [2024-07-22 12:27:52.898471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.991 [2024-07-22 12:27:52.898495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.991 [2024-07-22 12:27:52.898515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.991 [2024-07-22 12:27:52.902009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:44.991 [2024-07-22 12:27:52.911076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:44.991 [2024-07-22 12:27:52.911546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:44.991 [2024-07-22 12:27:52.911581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:44.991 [2024-07-22 12:27:52.911601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:44.991 [2024-07-22 12:27:52.911883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:44.991 [2024-07-22 12:27:52.912134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:44.991 [2024-07-22 12:27:52.912158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:44.991 [2024-07-22 12:27:52.912176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:44.991 [2024-07-22 12:27:52.915648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:52.924947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:52.925398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:52.925428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:52.925445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:52.925720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:52.925949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:52.925974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:52.925991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:52.929496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:52.938772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:52.939285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:52.939323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:52.939345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:52.939590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:52.939825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:52.939846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:52.939862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:52.943393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:52.952703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:52.953167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:52.953205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:52.953226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:52.953471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:52.953731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:52.953752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:52.953768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:52.957251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:52.966505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:52.966922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:52.966971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:52.966990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:52.967231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:52.967472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:52.967497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:52.967514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:52.971006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:52.979723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:45.252 [2024-07-22 12:27:52.979755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:45.252 [2024-07-22 12:27:52.979770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:45.252 [2024-07-22 12:27:52.979782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:45.252 [2024-07-22 12:27:52.979791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:45.252 [2024-07-22 12:27:52.979842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:32:45.252 [2024-07-22 12:27:52.979900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:32:45.252 [2024-07-22 12:27:52.979903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:45.252 [2024-07-22 12:27:52.980069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:52.980551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:52.980580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:52.980598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:52.980834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:52.981058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:52.981078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:52.981093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:52.984255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:52.993548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:52.994173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:52.994212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:52.994234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:52.994486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:52.994723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:52.994746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:52.994764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:52.997893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:53.007266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:53.007841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:53.007879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:53.007901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:53.008155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:53.008377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:53.008399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:53.008416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:53.011567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:53.020849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:53.021353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:53.021392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:53.021413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:53.021669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:53.021887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:53.021909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:53.021941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:53.025067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:53.034303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:53.034850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:53.034886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:53.034905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:53.035155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:53.035365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:53.035386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:53.035402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:53.038558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:53.047868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:53.048397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:53.048436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.252 [2024-07-22 12:27:53.048457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.252 [2024-07-22 12:27:53.048693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.252 [2024-07-22 12:27:53.048918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.252 [2024-07-22 12:27:53.048940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.252 [2024-07-22 12:27:53.048958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.252 [2024-07-22 12:27:53.052257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.252 [2024-07-22 12:27:53.061433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.252 [2024-07-22 12:27:53.061861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.252 [2024-07-22 12:27:53.061895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.253 [2024-07-22 12:27:53.061913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.253 [2024-07-22 12:27:53.062148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.253 [2024-07-22 12:27:53.062372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.253 [2024-07-22 12:27:53.062394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.253 [2024-07-22 12:27:53.062410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.253 [2024-07-22 12:27:53.065533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.253 [2024-07-22 12:27:53.074996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.253 [2024-07-22 12:27:53.075396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.253 [2024-07-22 12:27:53.075425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.253 [2024-07-22 12:27:53.075442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.253 [2024-07-22 12:27:53.075665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.253 [2024-07-22 12:27:53.075883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.253 [2024-07-22 12:27:53.075906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.253 [2024-07-22 12:27:53.075936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.253 [2024-07-22 12:27:53.079157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.253 [2024-07-22 12:27:53.088479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.253 [2024-07-22 12:27:53.088866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.253 [2024-07-22 12:27:53.088895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.253 [2024-07-22 12:27:53.088912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.253 [2024-07-22 12:27:53.089126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.253 [2024-07-22 12:27:53.089351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.253 [2024-07-22 12:27:53.089373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.253 [2024-07-22 12:27:53.089387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
12:27:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:45.253 [2024-07-22 12:27:53.092639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.253 [2024-07-22 12:27:53.102139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.253 [2024-07-22 12:27:53.102586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.253 [2024-07-22 12:27:53.102623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.253 [2024-07-22 12:27:53.102642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.253 [2024-07-22 12:27:53.102857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.253 [2024-07-22 12:27:53.103078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.253 [2024-07-22 12:27:53.103098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.253 [2024-07-22 12:27:53.103112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.253 [2024-07-22 12:27:53.106311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:45.253 [2024-07-22 12:27:53.115649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:45.253 [2024-07-22 12:27:53.116077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.253 [2024-07-22 12:27:53.116108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.253 [2024-07-22 12:27:53.116124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:45.253 [2024-07-22 12:27:53.116357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.253 [2024-07-22 12:27:53.116579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.253 [2024-07-22 12:27:53.116629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.253 [2024-07-22 12:27:53.116651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.253 [2024-07-22 12:27:53.119879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.253 [2024-07-22 12:27:53.121273] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:45.253 [2024-07-22 12:27:53.129225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.253 [2024-07-22 12:27:53.129676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.253 [2024-07-22 12:27:53.129706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420
00:32:45.253 [2024-07-22 12:27:53.129722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set
00:32:45.253 [2024-07-22 12:27:53.129955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor
00:32:45.253 [2024-07-22 12:27:53.130170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.253 [2024-07-22 12:27:53.130191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.253 [2024-07-22 12:27:53.130204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.253 [2024-07-22 12:27:53.133323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
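Note: rpc_cmd in these scripts is a thin wrapper that sends the named RPC to the target's /var/tmp/spdk.sock socket; the "*** TCP Transport Init ***" notice interleaved above is the target-side acknowledgement of this call. Run standalone it would look roughly like this (my reading of the flags, to be verified against rpc.py --help: -u sets the I/O unit size to 8192 bytes, -o disables the TCP C2H-success optimization):

  # Create the TCP transport on the freshly restarted target (sketch; rpc.py from the SPDK tree).
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192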
00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:45.253 [2024-07-22 12:27:53.142923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.253 [2024-07-22 12:27:53.143276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.253 [2024-07-22 12:27:53.143304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:45.253 [2024-07-22 12:27:53.143320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:45.253 [2024-07-22 12:27:53.143541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:45.253 [2024-07-22 12:27:53.143801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.253 [2024-07-22 12:27:53.143824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.253 [2024-07-22 12:27:53.143838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.253 [2024-07-22 12:27:53.147127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.253 [2024-07-22 12:27:53.156399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.253 [2024-07-22 12:27:53.156963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.253 [2024-07-22 12:27:53.157004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:45.253 [2024-07-22 12:27:53.157026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:45.253 [2024-07-22 12:27:53.157281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:45.253 [2024-07-22 12:27:53.157494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.253 [2024-07-22 12:27:53.157516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.253 [2024-07-22 12:27:53.157534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.253 [2024-07-22 12:27:53.160713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.253 Malloc0 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:45.253 [2024-07-22 12:27:53.169998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.253 [2024-07-22 12:27:53.170433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.253 [2024-07-22 12:27:53.170462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:45.253 [2024-07-22 12:27:53.170480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:45.253 [2024-07-22 12:27:53.170704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:45.253 [2024-07-22 12:27:53.170938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.253 [2024-07-22 12:27:53.170982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.253 [2024-07-22 12:27:53.170997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:45.253 [2024-07-22 12:27:53.174341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.253 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:45.513 [2024-07-22 12:27:53.183668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.513 [2024-07-22 12:27:53.184105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.513 [2024-07-22 12:27:53.184134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3b50 with addr=10.0.0.2, port=4420 00:32:45.513 [2024-07-22 12:27:53.184151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3b50 is same with the state(5) to be set 00:32:45.513 [2024-07-22 12:27:53.184379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3b50 (9): Bad file descriptor 00:32:45.513 [2024-07-22 12:27:53.184435] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.513 [2024-07-22 12:27:53.184643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.513 [2024-07-22 12:27:53.184665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.513 [2024-07-22 12:27:53.184679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.513 [2024-07-22 12:27:53.188022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.513 12:27:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.513 12:27:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1142481 00:32:45.513 [2024-07-22 12:27:53.197146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.513 [2024-07-22 12:27:53.273528] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
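The rpc_cmd calls interleaved through the failure blocks above correspond to plain rpc.py invocations. A minimal sketch of the same target bring-up, assuming a running nvmf_tgt and SPDK's scripts/rpc.py, with flags copied from the xtrace lines above:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # transport options as passed by nvmf/common.sh above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener registers (the "Target Listening on 10.0.0.2 port 4420" notice above), the next reset attempt connects and the reset completes successfully.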
00:32:55.569
00:32:55.569 Latency(us)
00:32:55.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:55.569 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:55.569 Verification LBA range: start 0x0 length 0x4000
00:32:55.569 Nvme1n1 : 15.01 6664.97 26.04 9018.61 0.00 8137.42 837.40 20777.34
00:32:55.569 ===================================================================================================================
00:32:55.569 Total : 6664.97 26.04 9018.61 0.00 8137.42 837.40 20777.34
00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:55.569 rmmod nvme_tcp 00:32:55.569 rmmod nvme_fabrics 00:32:55.569 rmmod nvme_keyring 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1143255 ']' 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1143255 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1143255 ']' 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1143255 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143255 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143255' 00:32:55.569 killing process with pid 1143255 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1143255 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1143255 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
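The Fail/s column exceeding IOPS (9018.61 vs 6664.97) is expected for this test: the run deliberately forces repeated controller resets, and each reset aborts the I/O queued behind it. The nvmftestfini teardown that the xtrace lines above walk through reduces to roughly this sketch (the target pid is 1143255 in this run):

    modprobe -v -r nvme-tcp      # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its output
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt reactor process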
00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:55.569 12:28:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.945 12:28:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:56.945 00:32:56.945 real 0m22.582s 00:32:56.945 user 1m0.121s 00:32:56.945 sys 0m4.415s 00:32:56.945 12:28:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:56.945 12:28:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:56.945 ************************************ 00:32:56.945 END TEST nvmf_bdevperf 00:32:56.945 ************************************ 00:32:56.945 12:28:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:56.945 12:28:04 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:56.945 12:28:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:56.945 12:28:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:56.946 12:28:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:56.946 ************************************ 00:32:56.946 START TEST nvmf_target_disconnect 00:32:56.946 ************************************ 00:32:56.946 12:28:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:57.245 * Looking for test storage... 
00:32:57.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:32:57.245 12:28:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:59.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:59.147 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.147 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.148 12:28:06 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:59.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:59.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.148 12:28:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:59.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:32:59.148 00:32:59.148 --- 10.0.0.2 ping statistics --- 00:32:59.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.148 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:59.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:32:59.148 00:32:59.148 --- 10.0.0.1 ping statistics --- 00:32:59.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.148 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:59.148 12:28:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:59.406 ************************************ 00:32:59.406 START TEST nvmf_target_disconnect_tc1 00:32:59.406 ************************************ 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:32:59.406 
12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:59.406 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.406 [2024-07-22 12:28:07.185461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.406 [2024-07-22 12:28:07.185541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d053e0 with addr=10.0.0.2, port=4420 00:32:59.406 [2024-07-22 12:28:07.185580] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:59.406 [2024-07-22 12:28:07.185604] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:59.406 [2024-07-22 12:28:07.185630] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:59.406 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:59.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:59.406 Initializing NVMe Controllers 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:59.406 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:59.406 00:32:59.407 real 0m0.093s 00:32:59.407 user 0m0.045s 00:32:59.407 sys 
0m0.047s 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:59.407 ************************************ 00:32:59.407 END TEST nvmf_target_disconnect_tc1 00:32:59.407 ************************************ 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:59.407 ************************************ 00:32:59.407 START TEST nvmf_target_disconnect_tc2 00:32:59.407 ************************************ 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1146293 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1146293 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1146293 ']' 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
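The tc1 case that just passed does so by inverting an expected failure: with no target listening yet, spdk_nvme_probe() cannot create the admin qpair, the reconnect example exits non-zero, and the NOT wrapper from autotest_common.sh converts that into success (the es=1 / (( !es == 0 )) lines above). A simplified sketch of that inversion pattern, not the exact autotest_common.sh implementation:

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, keep its exit status
        (( es != 0 ))    # this step succeeds only if the command failed
    }
    NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'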
00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:59.407 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.407 [2024-07-22 12:28:07.291462] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:32:59.407 [2024-07-22 12:28:07.291535] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.407 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.407 [2024-07-22 12:28:07.328537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:59.665 [2024-07-22 12:28:07.355775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:59.665 [2024-07-22 12:28:07.443959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.665 [2024-07-22 12:28:07.444014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.665 [2024-07-22 12:28:07.444038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.665 [2024-07-22 12:28:07.444049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.665 [2024-07-22 12:28:07.444059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:59.665 [2024-07-22 12:28:07.444141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:32:59.665 [2024-07-22 12:28:07.444204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:32:59.665 [2024-07-22 12:28:07.444269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:32:59.665 [2024-07-22 12:28:07.444271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.665 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.931 Malloc0 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.931 12:28:07 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.931 [2024-07-22 12:28:07.613117] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.931 [2024-07-22 12:28:07.641340] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1146440 00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 
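At this point tc2 has a live target (pid 1146293) and has launched the reconnect example (pid 1146440) against it; two seconds into the run the target is killed with SIGKILL, so every outstanding command completes with an error (sct=0, sc=8) and each qpair reports a transport failure, which is what the long runs of "completed with error ... starting I/O failed" lines below record. The shape of the fault injection, as a sketch:

    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # hard-kill the target mid-run; in-flight I/O fails immediately
    sleep 2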
00:32:59.931 12:28:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:59.931 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.845 12:28:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1146293 00:33:01.845 12:28:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Write completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.845 Read completed with error (sct=0, sc=8) 00:33:01.845 starting I/O failed 00:33:01.846 Read completed with error (sct=0, sc=8) 00:33:01.846 starting I/O failed 00:33:01.846 Read completed with error (sct=0, sc=8) 00:33:01.846 starting I/O failed 00:33:01.846 Write completed with error (sct=0, sc=8) 00:33:01.846 starting I/O failed 00:33:01.846 Read completed with error (sct=0, sc=8) 00:33:01.846 starting I/O failed 00:33:01.846 Write completed with error (sct=0, sc=8) 00:33:01.846 starting I/O failed 00:33:01.846 Read completed with error (sct=0, sc=8) 00:33:01.846 starting I/O 
failed
00:33:01.846 Write completed with error (sct=0, sc=8)
00:33:01.846 starting I/O failed
00:33:01.846 [2024-07-22 12:28:09.666150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:01.846 Read completed with error (sct=0, sc=8)
00:33:01.846 starting I/O failed
[... the same "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for every I/O still outstanding on the qpair ...]
00:33:01.846 [2024-07-22 12:28:09.666486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... another batch of aborted Read/Write completions, as above ...]
00:33:01.846 [2024-07-22 12:28:09.666845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... another batch of aborted Read/Write completions, as above ...]
00:33:01.846 [2024-07-22 12:28:09.667188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
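What this storm of messages shows: when the TCP connection behind a queue pair drops, SPDK completes every I/O still in flight back to its callback with an abort status (sct=0 is the generic status code type, sc=8 one of its command-aborted codes), and the completion-polling call itself then returns -6 (-ENXIO), which nvme_qpair.c logs once per queue pair as "CQ transport error". Below is a minimal sketch in C (the language of the sources named above) of how an application sees both sides; io_done, poll_once, ctx and qpair are illustrative placeholders, not names from this test.

/* Minimal sketch, not the test's actual code: how an SPDK application
 * observes the two kinds of lines above. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback of the kind passed to spdk_nvme_ns_cmd_read()/write(). */
static void io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct = status code type (0 = generic), sc = status code */
		fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
	}
}

static void poll_once(struct spdk_nvme_qpair *qpair)
{
	/* Second argument 0 = no cap on completions processed per call. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport connection is gone; the outstanding I/Os have
		 * just been completed through io_done() with abort status. */
		fprintf(stderr, "CQ transport error %d on this qpair\n", rc);
	}
}

With all four I/O queue pairs gone, the initiator falls back to reconnecting, which is where the log picks up next.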
00:33:01.846 [2024-07-22 12:28:09.667430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.846 [2024-07-22 12:28:09.667491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:01.846 qpair failed and we were unable to recover it.
00:33:01.846 [2024-07-22 12:28:09.668040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.846 [2024-07-22 12:28:09.668085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:01.846 qpair failed and we were unable to recover it.
[... this three-line pattern repeats, alternating between tqpair 0x7f0554000b90 and tqpair 0x7f0544000b90, for well over a hundred further reconnect attempts; only the bracketed timestamps advance, from 12:28:09.668 through 12:28:09.702 ...]
00:33:01.851 [2024-07-22 12:28:09.702485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.851 [2024-07-22 12:28:09.702510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:01.851 qpair failed and we were unable to recover it.
00:33:01.851 [2024-07-22 12:28:09.702724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.702751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.702871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.702898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.703052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.703078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.703227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.703254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.703479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.703506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.703665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.703693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.703870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.703896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.704051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.704077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.704301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.704327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.704444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.704471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 
00:33:01.851 [2024-07-22 12:28:09.704652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.704680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.704848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.704876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.705073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.705099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.705237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.705263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.705409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.705435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.705619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.851 [2024-07-22 12:28:09.705645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.851 qpair failed and we were unable to recover it. 00:33:01.851 [2024-07-22 12:28:09.705776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.705802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.705947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.705973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.706118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.706159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.706339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.706365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 
00:33:01.852 [2024-07-22 12:28:09.706490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.706516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.706670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.706697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.706847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.706873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.707061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.707087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.707240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.707266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.707445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.707472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.707587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.707631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.707781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.707808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.707949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.707975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.708146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.708196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 
00:33:01.852 [2024-07-22 12:28:09.708359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.708388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.708529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.708556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.708686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.708713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.708882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.708935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.709111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.709137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.709318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.709344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.709490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.709519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.709661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.709688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.709831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.709858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.710012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.710046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 
00:33:01.852 [2024-07-22 12:28:09.710257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.710287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.710455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.710482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.710624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.710651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.711671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.711702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.711880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.711925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.712111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.712138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.712286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.712312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.712511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.712541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.712694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.712720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.712863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.712888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 
00:33:01.852 [2024-07-22 12:28:09.713063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.713094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.713209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.713235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.713411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.713437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.713621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.713648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.713802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.713827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.714023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.714049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.852 [2024-07-22 12:28:09.714159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.852 [2024-07-22 12:28:09.714185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.852 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.714361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.714403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.714570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.714601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.714758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.714786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 
00:33:01.853 [2024-07-22 12:28:09.714933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.714960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.715103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.715132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.715296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.715325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.715491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.715520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.715691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.715718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.715841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.715867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.716024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.716051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.716179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.716204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.716370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.716396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.716556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.716582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 
00:33:01.853 [2024-07-22 12:28:09.716715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.716742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.716865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.716890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.717081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.717108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.717273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.717302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.717465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.717492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.717655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.717682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.717801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.717827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.717986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.718012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.718178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.718207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.718375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.718404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 
00:33:01.853 [2024-07-22 12:28:09.718563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.718592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.718737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.718763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.718882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.718929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.719057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.719087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.719271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.719299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.719435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.719464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.719638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.719666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.719779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.719804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.719914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.719950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.720121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.720146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 
00:33:01.853 [2024-07-22 12:28:09.720355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.720383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.720522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.720548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.720686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.720714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.720841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.720866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.720986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.721011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.721206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.721235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.721365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.721393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.853 qpair failed and we were unable to recover it. 00:33:01.853 [2024-07-22 12:28:09.721552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.853 [2024-07-22 12:28:09.721580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.721789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.721820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.721963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.721992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 
00:33:01.854 [2024-07-22 12:28:09.722166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.722192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.722332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.722362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.722502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.722529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.722691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.722732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.722851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.722878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.723081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.723111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.723286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.723333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.723523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.723549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.723672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.723699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.723825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.723852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 
00:33:01.854 [2024-07-22 12:28:09.724010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.724037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.724180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.724206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.724327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.724354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.724497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.724528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.724702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.724742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.724877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.724905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.725070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.725097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.725255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.725299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.725478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.725504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.725625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.725652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 
00:33:01.854 [2024-07-22 12:28:09.725789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.725815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.725979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.726022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.726186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.726230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.726399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.726425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.726544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.726572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.726697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.726724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.726845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.726871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.727036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.727065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.727254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.727302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 00:33:01.854 [2024-07-22 12:28:09.727450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.727475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it. 
00:33:01.854 [2024-07-22 12:28:09.727597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.854 [2024-07-22 12:28:09.727632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.854 qpair failed and we were unable to recover it.
00:33:01.854-00:33:01.860 [2024-07-22 12:28:09.727 - 12:28:09.766] (the same three-message sequence - connect() failed, errno = 111; sock connection error of tqpair; qpair failed and we were unable to recover it. - repeats for every reconnect attempt in this interval, cycling through tqpair handles 0x7f0554000b90, 0x7f054c000b90, and 0x7f0544000b90, always against addr=10.0.0.2, port=4420)
00:33:01.860 [2024-07-22 12:28:09.766740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.860 [2024-07-22 12:28:09.766786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.860 qpair failed and we were unable to recover it. 00:33:01.860 [2024-07-22 12:28:09.766917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.860 [2024-07-22 12:28:09.766948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.860 qpair failed and we were unable to recover it. 00:33:01.860 [2024-07-22 12:28:09.767146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.860 [2024-07-22 12:28:09.767173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.860 qpair failed and we were unable to recover it. 00:33:01.860 [2024-07-22 12:28:09.767348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.860 [2024-07-22 12:28:09.767376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.860 qpair failed and we were unable to recover it. 00:33:01.860 [2024-07-22 12:28:09.767503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.860 [2024-07-22 12:28:09.767531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.860 qpair failed and we were unable to recover it. 00:33:01.860 [2024-07-22 12:28:09.767677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.860 [2024-07-22 12:28:09.767703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:01.860 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.767881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.767921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.768139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.768186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.768329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.768357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.768502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.768529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 
00:33:02.141 [2024-07-22 12:28:09.768687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.768717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.768856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.768882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.769063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.769091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.769240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.769270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.769414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.769441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.769583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.769610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.769749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.769776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.769897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.769924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.770066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.770093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.770240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.770276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 
00:33:02.141 [2024-07-22 12:28:09.770393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.770420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.770546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.770575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.770738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.770767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.770893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.770922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.771057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.771084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.771201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.771233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.771401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.771428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.771545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.771571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.771707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.771734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.771891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.771917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 
00:33:02.141 [2024-07-22 12:28:09.772061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.772088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.772208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.772235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.772441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.772468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.772589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.772623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.772750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.772777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.772896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.772922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.773082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.773124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.773252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.773282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.773446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.773472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.773627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.773667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 
00:33:02.141 [2024-07-22 12:28:09.773794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.773821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.773943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.773970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.774122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.774149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.774313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.774342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.774573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.774602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.141 [2024-07-22 12:28:09.774746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.141 [2024-07-22 12:28:09.774773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.141 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.774940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.774969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.775153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.775180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.775324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.775350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.775500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.775526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 
00:33:02.142 [2024-07-22 12:28:09.775674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.775701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.775819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.775863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.776035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.776062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.776174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.776200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.776348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.776392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.776552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.776581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.776725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.776752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.776864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.776891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.777044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.777070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.777282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.777308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 
00:33:02.142 [2024-07-22 12:28:09.777440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.777478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.777663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.777693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.777833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.777859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.777980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.778006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.778156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.778183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.778356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.778386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.778562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.778592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.778753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.778798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.778943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.778971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.779094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.779122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 
00:33:02.142 [2024-07-22 12:28:09.779269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.779296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.779439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.779469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.779639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.779685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.779813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.779840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.779964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.779991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.780167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.780212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.780369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.780401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.780562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.780590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.780733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.780760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.780886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.780933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 
00:33:02.142 [2024-07-22 12:28:09.781090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.781116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.781274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.781309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.781460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.781489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.781656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.781683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.781804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.781831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.782047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.782095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.142 qpair failed and we were unable to recover it. 00:33:02.142 [2024-07-22 12:28:09.782266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.142 [2024-07-22 12:28:09.782294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.782460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.782490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.782695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.782722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.782839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.782866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 
00:33:02.143 [2024-07-22 12:28:09.782993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.783020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.783165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.783192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.783329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.783359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.783477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.783505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.783690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.783721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.783892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.783918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.784081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.784110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.784276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.784303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.784471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.784507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.784626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.784664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 
00:33:02.143 [2024-07-22 12:28:09.784812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.784838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.785001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.785027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.785217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.785246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.785405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.785454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.785694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.785722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.785847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.785879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.786101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.786128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.786267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.786293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.786443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.786470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.786617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.786644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 
00:33:02.143 [2024-07-22 12:28:09.786792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.786819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.786959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.786990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.787153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.787182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.787320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.787347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.787497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.787524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.787692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.787722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.787917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.787944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.788115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.788143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.788347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.788374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.788493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.788519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 
00:33:02.143 [2024-07-22 12:28:09.788676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.788721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.788842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.788872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.789017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.789044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.789197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.789224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.789398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.789427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.789578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.789605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.789754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.143 [2024-07-22 12:28:09.789781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.143 qpair failed and we were unable to recover it. 00:33:02.143 [2024-07-22 12:28:09.789908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.144 [2024-07-22 12:28:09.789938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.144 qpair failed and we were unable to recover it. 00:33:02.144 [2024-07-22 12:28:09.790103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.144 [2024-07-22 12:28:09.790129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.144 qpair failed and we were unable to recover it. 00:33:02.144 [2024-07-22 12:28:09.790301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.144 [2024-07-22 12:28:09.790344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.144 qpair failed and we were unable to recover it. 
00:33:02.144 [2024-07-22 12:28:09.790499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.144 [2024-07-22 12:28:09.790528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:02.144 qpair failed and we were unable to recover it.
[... the same three-entry sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats for every reconnect attempt from [2024-07-22 12:28:09.790739] through [2024-07-22 12:28:09.831795] ...]
00:33:02.149 [2024-07-22 12:28:09.831975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.149 [2024-07-22 12:28:09.832001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:02.149 qpair failed and we were unable to recover it.
00:33:02.149 [2024-07-22 12:28:09.832151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.149 [2024-07-22 12:28:09.832193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.149 qpair failed and we were unable to recover it. 00:33:02.149 [2024-07-22 12:28:09.832361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.149 [2024-07-22 12:28:09.832388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.149 qpair failed and we were unable to recover it. 00:33:02.149 [2024-07-22 12:28:09.832576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.149 [2024-07-22 12:28:09.832605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.149 qpair failed and we were unable to recover it. 00:33:02.149 [2024-07-22 12:28:09.832794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.149 [2024-07-22 12:28:09.832821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.832983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.833011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.833199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.833233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.833385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.833415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.833557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.833583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.833776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.833806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.833953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.833980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 
00:33:02.150 [2024-07-22 12:28:09.834105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.834132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.834280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.834307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.834452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.834497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.834686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.834714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.834881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.834917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.835069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.835098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.835255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.835282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.835433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.835459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.835581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.835608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.835811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.835841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 
00:33:02.150 [2024-07-22 12:28:09.836019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.836046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.836199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.836243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.836403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.836432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.836593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.836631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.836844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.836880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.837026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.837056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.837204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.837234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.837367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.837397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.837565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.837591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.837737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.837767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 
00:33:02.150 [2024-07-22 12:28:09.837948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.837976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.838124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.838153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.838276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.838303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.838453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.838481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.838638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.150 [2024-07-22 12:28:09.838669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.150 qpair failed and we were unable to recover it. 00:33:02.150 [2024-07-22 12:28:09.838858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.838884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.839072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.839098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.839221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.839249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.839425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.839452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.839598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.839637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 
00:33:02.151 [2024-07-22 12:28:09.839806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.839833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.840018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.840047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.840232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.840260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.840421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.840450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.840627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.840663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.840818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.840847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.840981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.841025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.841209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.841239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.841431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.841458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.841648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.841678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 
00:33:02.151 [2024-07-22 12:28:09.841807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.841836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.842018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.842045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.842217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.842245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.842380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.842410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.842570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.842601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.842758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.842785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.842932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.842957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.843081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.843125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.843319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.843355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.843547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.843576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 
00:33:02.151 [2024-07-22 12:28:09.843730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.843757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.843931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.843974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.844173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.844199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.844320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.844346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.844544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.844573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.844723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.844750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.844876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.844918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.845107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.845137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.845330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.845357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.845505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.845531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 
00:33:02.151 [2024-07-22 12:28:09.845673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.845715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.845889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.845916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.846065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.846093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.846288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.846318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.846460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.846487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.846653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.151 [2024-07-22 12:28:09.846697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.151 qpair failed and we were unable to recover it. 00:33:02.151 [2024-07-22 12:28:09.846870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.846897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.847045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.847072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.847227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.847254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.847423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.847467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 
00:33:02.152 [2024-07-22 12:28:09.847657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.847685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.847859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.847889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.848040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.848069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.848206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.848235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.848419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.848446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.848636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.848670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.848835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.848864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.849054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.849081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.849191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.849218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.849356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.849383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 
00:33:02.152 [2024-07-22 12:28:09.849556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.849588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.849733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.849764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.849907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.849934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.850085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.850132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.850279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.850309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.850493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.850522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.850689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.850717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.850832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.850876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.851063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.851093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.851225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.851255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 
00:33:02.152 [2024-07-22 12:28:09.851480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.851510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.851717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.851745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.851863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.851890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.852009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.852036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.852190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.852217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.852358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.852385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.852531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.852573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.852737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.852768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.852934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.852961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.853076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.853104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 
00:33:02.152 [2024-07-22 12:28:09.853284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.853314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.853484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.853511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.853688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.853716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.853881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.853911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.854101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.854128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.854312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.854341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.152 [2024-07-22 12:28:09.854474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.152 [2024-07-22 12:28:09.854501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.152 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.854673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.854717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.854913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.854941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.855104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.855135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 
00:33:02.153 [2024-07-22 12:28:09.855327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.855354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.855521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.855551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.855713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.855743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.855901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.855930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.856124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.856150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.856294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.856320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.856508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.856537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.856713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.856741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.856886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.856911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 00:33:02.153 [2024-07-22 12:28:09.857075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.153 [2024-07-22 12:28:09.857108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.153 qpair failed and we were unable to recover it. 
00:33:02.153 [2024-07-22 12:28:09.857256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.153 [2024-07-22 12:28:09.857284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:02.153 qpair failed and we were unable to recover it.
00:33:02.158 [the same three-line error sequence repeats for every reconnect attempt from 12:28:09.857256 through 12:28:09.897858: each connect() to 10.0.0.2, port=4420 returned errno = 111, and tqpair=0x7f0544000b90 failed without recovering on any attempt]
00:33:02.158 [2024-07-22 12:28:09.898047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.158 [2024-07-22 12:28:09.898074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.158 qpair failed and we were unable to recover it. 00:33:02.158 [2024-07-22 12:28:09.898248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.158 [2024-07-22 12:28:09.898275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.158 qpair failed and we were unable to recover it. 00:33:02.158 [2024-07-22 12:28:09.898432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.158 [2024-07-22 12:28:09.898458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.158 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.898600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.898632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.898833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.898860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.899054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.899104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.899290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.899317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.899456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.899482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.899689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.899715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.899861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.899887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 
00:33:02.159 [2024-07-22 12:28:09.900063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.900090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.900252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.900279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.900451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.900480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.900661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.900691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.900837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.900868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.901037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.901066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.901206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.901232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.901359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.901386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.901562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.901591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.901775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.901820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 
00:33:02.159 [2024-07-22 12:28:09.901993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.902021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.902187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.902217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.902407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.902436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.902595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.902635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.902807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.902833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.902972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.903016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.903182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.903211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.903390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.903418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.903566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.903594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.903722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.903751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 
00:33:02.159 [2024-07-22 12:28:09.903872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.903899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.904098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.904128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.904295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.904321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.904439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.904465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.904663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.904693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.904869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.904896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.905035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.905061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.905220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.905250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.905404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.905434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.905631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.905662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 
00:33:02.159 [2024-07-22 12:28:09.905805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.905832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.905951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.905981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.906139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.906167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.159 qpair failed and we were unable to recover it. 00:33:02.159 [2024-07-22 12:28:09.906341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.159 [2024-07-22 12:28:09.906383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.906533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.906560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.906680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.906706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.906820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.906847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.907015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.907041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.907185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.907214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.907380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.907409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 
00:33:02.160 [2024-07-22 12:28:09.907576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.907607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.907798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.907825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.907999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.908030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.908266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.908319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.908455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.908487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.908676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.908707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.908871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.908898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.909059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.909088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.909281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.909307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.909471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.909510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 
00:33:02.160 [2024-07-22 12:28:09.909657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.909684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.909828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.909857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.910029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.910059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.910285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.910338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.910488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.910514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.910692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.910733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.910899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.910935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.911097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.911127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.911294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.911321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.911512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.911541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 
00:33:02.160 [2024-07-22 12:28:09.911677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.911707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.911842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.911873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.912053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.912080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.912311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.912365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.912492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.912523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.912693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.912719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.912888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.912915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.913083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.913132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.913291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.913321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 00:33:02.160 [2024-07-22 12:28:09.913482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.160 [2024-07-22 12:28:09.913511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.160 qpair failed and we were unable to recover it. 
00:33:02.161 [2024-07-22 12:28:09.913675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.913703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.913833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.913860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.914029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.914056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.914254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.914283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.914450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.914476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.914675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.914705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.914845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.914874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.915060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.915086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.915232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.915259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.915375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.915402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 
00:33:02.161 [2024-07-22 12:28:09.915544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.915571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.915732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.915760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.915928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.915961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.916102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.916161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.916324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.916353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.916502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.916530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.916735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.916763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.916964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.917025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.917167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.917197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.917359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.917389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 
00:33:02.161 [2024-07-22 12:28:09.917569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.917595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.917760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.917805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.917947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.917976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.918140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.918173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.918320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.918346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.918517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.918544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.918670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.918697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.918897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.918926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.919091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.919117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.919237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.919265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 
00:33:02.161 [2024-07-22 12:28:09.919413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.919449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.919662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.919691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.919833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.919859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.920002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.920029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.920172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.920215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.920346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.920376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.920515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.920542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.920702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.920728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.920936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.920965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 00:33:02.161 [2024-07-22 12:28:09.921149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.161 [2024-07-22 12:28:09.921179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.161 qpair failed and we were unable to recover it. 
00:33:02.161 [2024-07-22 12:28:09.921349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.921376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.921560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.921590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.921772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.921799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.921974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.922003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.922139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.922167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.922339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.922381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.922555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.922583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.922751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.922779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.922903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.922941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 00:33:02.162 [2024-07-22 12:28:09.923109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.162 [2024-07-22 12:28:09.923153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.162 qpair failed and we were unable to recover it. 
00:33:02.162 [2024-07-22 12:28:09.923312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.162 [2024-07-22 12:28:09.923341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:02.162 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from 12:28:09.923 through 12:28:09.963 ...]
00:33:02.167 [2024-07-22 12:28:09.963567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.167 [2024-07-22 12:28:09.963594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:02.167 qpair failed and we were unable to recover it.
00:33:02.167 [2024-07-22 12:28:09.963767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.167 [2024-07-22 12:28:09.963794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.167 qpair failed and we were unable to recover it. 00:33:02.167 [2024-07-22 12:28:09.963925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.963954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.964099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.964127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.964298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.964340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.964543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.964569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.964771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.964800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.964929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.964959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.965149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.965176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.965291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.965318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.965506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.965536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 
00:33:02.168 [2024-07-22 12:28:09.965670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.965699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.965834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.965865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.966036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.966065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.966222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.966252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.966408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.966440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.966598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.966634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.966823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.966850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.966970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.966997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.967142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.967169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.967365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.967398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 
00:33:02.168 [2024-07-22 12:28:09.967560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.967586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.967735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.967778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.967938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.967967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.968089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.968117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.968283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.968309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.968427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.968469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.968625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.968654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.968854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.968881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.969022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.969048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.969167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.969194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 
00:33:02.168 [2024-07-22 12:28:09.969344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.969371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.969515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.969543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.969686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.969714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.969868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.969896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.970063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.970092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.970210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.970237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.970396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.970422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.970569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.970618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.970812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.970838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.970979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.971021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 
00:33:02.168 [2024-07-22 12:28:09.971151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.168 [2024-07-22 12:28:09.971178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.168 qpair failed and we were unable to recover it. 00:33:02.168 [2024-07-22 12:28:09.971330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.971374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.971515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.971543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.971689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.971717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.971864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.971892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.971999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.972042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.972237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.972264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.972409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.972436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.972577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.972604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.972758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.972785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 
00:33:02.169 [2024-07-22 12:28:09.972924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.972951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.973070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.973097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.973239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.973266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.973380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.973406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.973553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.973579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.973744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.973772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.973889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.973916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.974089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.974115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.974287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.974314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.974434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.974465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 
00:33:02.169 [2024-07-22 12:28:09.974594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.974628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.974774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.974801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.974943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.974970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.975141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.975167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.975282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.975310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.975485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.975511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.975660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.975687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.975833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.975859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.976006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.976032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.976185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.976212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 
00:33:02.169 [2024-07-22 12:28:09.976355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.976381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.976505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.976532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.976642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.976667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.976810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.976837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.976985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.977011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.977183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.977210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.977333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.977359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.977477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.977505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.977653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.977680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.977829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.977855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 
00:33:02.169 [2024-07-22 12:28:09.977984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.978010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.978179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.169 [2024-07-22 12:28:09.978205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.169 qpair failed and we were unable to recover it. 00:33:02.169 [2024-07-22 12:28:09.978347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.978373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.978492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.978518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.978668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.978695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.978841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.978868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.978992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.979020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.979165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.979192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.979315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.979341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.979510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.979535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 
00:33:02.170 [2024-07-22 12:28:09.979663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.979691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.979833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.979859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.979978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.980006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.980133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.980160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.980277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.980304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.980475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.980502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.980635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.980662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.980784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.980811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.980960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.980987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.981103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.981134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 
00:33:02.170 [2024-07-22 12:28:09.981277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.981303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.981452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.981478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.981640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.981667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.981816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.981843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.982013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.982039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.982203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.982232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.982412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.982441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.982600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.982640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.982832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.982859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.983005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.983032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 
00:33:02.170 [2024-07-22 12:28:09.983209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.983236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.983379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.983406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.983516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.983543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.983703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.983730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.983874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.983901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.984019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.984046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.984194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.984220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.984408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.984437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.984595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.984639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.984763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.984790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 
00:33:02.170 [2024-07-22 12:28:09.984914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.984940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.985062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.985089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.170 [2024-07-22 12:28:09.985256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.170 [2024-07-22 12:28:09.985286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.170 qpair failed and we were unable to recover it. 00:33:02.171 [2024-07-22 12:28:09.985490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.985519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it. 00:33:02.171 [2024-07-22 12:28:09.985664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.985692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it. 00:33:02.171 [2024-07-22 12:28:09.985869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.985912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it. 00:33:02.171 [2024-07-22 12:28:09.986119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.986146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it. 00:33:02.171 [2024-07-22 12:28:09.986313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.986342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it. 00:33:02.171 [2024-07-22 12:28:09.986537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.986563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it. 00:33:02.171 [2024-07-22 12:28:09.986731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.986759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it. 
00:33:02.171 [2024-07-22 12:28:09.986874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.171 [2024-07-22 12:28:09.986901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.171 qpair failed and we were unable to recover it.
[... same three error messages repeated for tqpair=0x7f0554000b90 (addr=10.0.0.2, port=4420) through 2024-07-22 12:28:10.014587 ...]
00:33:02.175 [2024-07-22 12:28:10.014801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.175 [2024-07-22 12:28:10.014853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.175 qpair failed and we were unable to recover it.
[... same three error messages repeated for tqpair=0xa58450 through 2024-07-22 12:28:10.015576 ...]
00:33:02.175 [2024-07-22 12:28:10.015736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.175 [2024-07-22 12:28:10.015763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.175 qpair failed and we were unable to recover it.
[... same three error messages repeated for tqpair=0x7f0554000b90 through 2024-07-22 12:28:10.025588 ...]
00:33:02.176 [2024-07-22 12:28:10.025768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66490 is same with the state(5) to be set 00:33:02.176 [2024-07-22 12:28:10.025970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.026014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.026200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.026238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.026405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.026441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.026628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.026681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.026798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.026826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.026978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.027005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.027157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.027183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.027330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.027373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.027511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.027537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 
00:33:02.176 [2024-07-22 12:28:10.027667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.027694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.027810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.027836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.027965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.027991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.028138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.028169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.028341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.028370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.028508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.176 [2024-07-22 12:28:10.028552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.176 qpair failed and we were unable to recover it. 00:33:02.176 [2024-07-22 12:28:10.028752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.028779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.028931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.028959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.029123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.029149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.029295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.029321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 
00:33:02.177 [2024-07-22 12:28:10.029438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.029464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.029581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.029609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.029747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.029774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.029895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.029939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.030075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.030102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.030247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.030274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.030458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.030488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.030630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.030668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.030829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.030855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.031032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.031061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 
00:33:02.177 [2024-07-22 12:28:10.031223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.031249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.031409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.031439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.031581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.031611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.031793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.031820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.031969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.031997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.032185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.032215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.032391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.032418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.032586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.032624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.032798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.032825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.032973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.033000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 
00:33:02.177 [2024-07-22 12:28:10.033171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.033209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.033354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.033389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.033554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.033589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.033733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.033760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.033909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.033936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.034083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.034109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.034281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.034331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.034463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.034492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.034660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.034687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.034807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.034834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 
00:33:02.177 [2024-07-22 12:28:10.034997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.035026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.035161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.035188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.035372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.035401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.035564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.035598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.035815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.035842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.036010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.036039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.036219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.177 [2024-07-22 12:28:10.036246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.177 qpair failed and we were unable to recover it. 00:33:02.177 [2024-07-22 12:28:10.036393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.036420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.036576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.036604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.036770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.036796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 
00:33:02.178 [2024-07-22 12:28:10.036949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.036976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.037141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.037169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.037325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.037371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.037531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.037558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.037687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.037715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.037824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.037851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.038025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.038051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.038175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.038204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.038348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.038381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.038554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.038581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 
00:33:02.178 [2024-07-22 12:28:10.038713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.038741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.038867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.038910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.039074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.039101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.039223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.039250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.039399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.039426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.039562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.039588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.039711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.039738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.039857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.039884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.040041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.040068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.040245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.040288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 
00:33:02.178 [2024-07-22 12:28:10.040446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.040476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.040649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.040677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.040801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.040828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.040954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.040981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.041095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.041123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.041268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.041312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.041478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.041506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.041673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.041701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.041848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.041875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.042063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.042091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 
00:33:02.178 [2024-07-22 12:28:10.042249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.042275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.042446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.042490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.042663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.042693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.042835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.042862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.043027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.043056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.043248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.043277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.043436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.043463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.043628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.178 [2024-07-22 12:28:10.043658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.178 qpair failed and we were unable to recover it. 00:33:02.178 [2024-07-22 12:28:10.043846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.043872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.043992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.044020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 
00:33:02.179 [2024-07-22 12:28:10.044191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.044233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.044384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.044415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.044580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.044607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.044742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.044768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.044931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.044957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.045074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.045100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.045218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.045244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.045385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.045412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.045575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.045604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.045778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.045806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 
00:33:02.179 [2024-07-22 12:28:10.045956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.045982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.046124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.046150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.046344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.046372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.046522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.046565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.046770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.046797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.046935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.046964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.047101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.047131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.047286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.047312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.047479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.047505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.047731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.047758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 
00:33:02.179 [2024-07-22 12:28:10.047906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.047937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.048085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.048113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.048301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.048345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.048513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.048540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.048716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.048743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.048908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.048937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.049075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.049102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.049250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.049276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.049415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.049441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.179 [2024-07-22 12:28:10.049588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.049621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 
00:33:02.179 [2024-07-22 12:28:10.049742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.179 [2024-07-22 12:28:10.049770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.179 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.049940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.049967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.050109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.050137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.050252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.050279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.050433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.050461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.050643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.050672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.050810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.050837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.051033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.051061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.051221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.051248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.051390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.051416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 
00:33:02.467 [2024-07-22 12:28:10.051569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.051598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.051787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.051815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.051959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.051985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.052140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.052167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.052310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.052337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.052483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.052509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.052653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.052681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.052852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.052879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.053025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.053052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.467 [2024-07-22 12:28:10.053215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.053243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 
00:33:02.467 [2024-07-22 12:28:10.053409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.467 [2024-07-22 12:28:10.053435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.467 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.053586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.053619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.053766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.053793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.053937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.053965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.054107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.054133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.054254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.054281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.054423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.054449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.054594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.054628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.054751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.054779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.054929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.054956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 
00:33:02.468 [2024-07-22 12:28:10.055126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.055156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.055297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.055323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.055462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.055491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.055640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.055683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.055827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.055853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.055972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.056000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.056170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.056212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.056394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.056422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.056561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.056588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.056724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.056751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 
00:33:02.468 [2024-07-22 12:28:10.056912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.056940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.057106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.057133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.057324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.057353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.057480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.057510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.057685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.057713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.057828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.057855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.057974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.058000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.058168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.058194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.058337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.058363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.058536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.058562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 
00:33:02.468 [2024-07-22 12:28:10.058708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.058751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.058987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.059015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.059159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.059185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.059321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.059348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.059533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.059559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.059709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.059737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.059916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.059958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.060152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.060180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.060383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.060426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 00:33:02.468 [2024-07-22 12:28:10.060572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.468 [2024-07-22 12:28:10.060598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.468 qpair failed and we were unable to recover it. 
00:33:02.468 [2024-07-22 12:28:10.060775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.060818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.060978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.061020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.061160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.061202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.061368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.061410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.061551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.061577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.061734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.061761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.061933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.061960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.062115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.062141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.062292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.062318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.062437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.062463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 
00:33:02.469 [2024-07-22 12:28:10.062689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.062721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.062868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.062895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.063069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.063095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.063240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.063266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.063436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.063462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.063619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.063646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.063754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.063780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.063931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.063959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.064112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.064140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.064368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.064396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 
00:33:02.469 [2024-07-22 12:28:10.064554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.064582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.064718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.064745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.065039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.065066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.065237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.065263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.065393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.065420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.065562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.065589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.065838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.065867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.066049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.066076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.066211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.066237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.066375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.066408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 
00:33:02.469 [2024-07-22 12:28:10.066578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.066620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.066804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.066854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.068007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.068052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.068246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.068291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.068411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.068437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.068595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.068639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.068801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.068845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.068982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.069009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.069161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.069205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 00:33:02.469 [2024-07-22 12:28:10.069373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.069398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.469 qpair failed and we were unable to recover it. 
00:33:02.469 [2024-07-22 12:28:10.069546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.469 [2024-07-22 12:28:10.069572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.069720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.069763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.069896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.069939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.070134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.070163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.070291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.070317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.070481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.070506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.070660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.070687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.070807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.070835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.070968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.070993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.071136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.071161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 
00:33:02.470 [2024-07-22 12:28:10.071344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.071384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.071516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.071542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.071692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.071718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.071852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.071880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.072014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.072041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.072196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.072224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.072411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.072438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.072565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.072592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.072771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.072796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.073024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.073074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 
00:33:02.470 [2024-07-22 12:28:10.073206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.073234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.073388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.073416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.073564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.073591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.073751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.073777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.073965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.073992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.074185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.074227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.074421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.074449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.074636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.074679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.074868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.074896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.075070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.075113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 
00:33:02.470 [2024-07-22 12:28:10.075278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.075321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.075436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.075461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.075568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.075593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.075767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.075810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.075973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.076015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.076186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.076230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.076403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.076429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.076598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.076638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.076776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.076818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.076979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.077021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 
00:33:02.470 [2024-07-22 12:28:10.077203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.470 [2024-07-22 12:28:10.077255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.470 qpair failed and we were unable to recover it. 00:33:02.470 [2024-07-22 12:28:10.077399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.077424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.077548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.077573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.077746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.077789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.077929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.077971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.078139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.078165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.078288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.078314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.078459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.078487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.078603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.078638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.078833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.078861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 
00:33:02.471 [2024-07-22 12:28:10.079108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.079152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.079324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.079349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.079490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.079516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.079708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.079753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.079893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.079937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.080078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.080122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.080241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.080268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.080438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.080463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.080638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.080683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.080819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.080862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 
00:33:02.471 [2024-07-22 12:28:10.081008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.081036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.081210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.081249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.081471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.081496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.081610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.081642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.081824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.081852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.081983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.082010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.082202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.082229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.082383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.082413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.082575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.082601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 00:33:02.471 [2024-07-22 12:28:10.082835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.471 [2024-07-22 12:28:10.082861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.471 qpair failed and we were unable to recover it. 
00:33:02.471 [2024-07-22 12:28:10.083086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.471 [2024-07-22 12:28:10.083135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.471 qpair failed and we were unable to recover it.
00:33:02.471 [2024-07-22 12:28:10.083673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.471 [2024-07-22 12:28:10.083700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.471 qpair failed and we were unable to recover it.
[log condensed: the identical connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 2024-07-22 12:28:10.083086 through 12:28:10.122061 (wall clock 00:33:02.471-00:33:02.477), alternating between tqpair=0x7f054c000b90 and tqpair=0xa58450, always with addr=10.0.0.2, port=4420.]
00:33:02.477 [2024-07-22 12:28:10.122209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.122234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.122378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.122419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.122543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.122571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.122768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.122793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.122930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.122957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.123109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.123137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.123306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.123331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.123470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.123494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.123629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.123657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.123789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.123815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 
00:33:02.477 [2024-07-22 12:28:10.123959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.124001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.124156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.124181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.124322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.124347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.124497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.124525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.124659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.124693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.124857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.124883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.125026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.125068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.125219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.125247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.125410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.125434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.125558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.125600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 
00:33:02.477 [2024-07-22 12:28:10.125779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.125807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.125994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.126019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.126143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.126167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.126289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.126314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.126426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.126450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.126591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.126639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.126775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.126802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.126965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.126990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.127113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.127137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.127337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.127364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 
00:33:02.477 [2024-07-22 12:28:10.127519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.127543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.127669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.127694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.127836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.127861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.477 [2024-07-22 12:28:10.128007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.477 [2024-07-22 12:28:10.128032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.477 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.128173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.128197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.128368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.128395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.128530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.128572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.128726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.128750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.128897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.128922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.129032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.129056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 
00:33:02.478 [2024-07-22 12:28:10.129196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.129239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.129389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.129416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.129561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.129586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.129710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.129735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.129859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.129883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.129998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.130022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.130130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.130154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.130291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.130319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.130473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.130498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.130635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.130660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 
00:33:02.478 [2024-07-22 12:28:10.130841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.130869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.131037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.131062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.131224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.131251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.131411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.131441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.131584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.131608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.131761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.131807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.131944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.131972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.132166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.132191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.132358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.132385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.132513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.132556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 
00:33:02.478 [2024-07-22 12:28:10.132699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.132724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.132839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.132864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.133052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.133076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.133190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.133215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.133355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.133380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.133493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.133517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.133688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.133714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.133856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.133901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.134055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.134082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.134255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.134280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 
00:33:02.478 [2024-07-22 12:28:10.134423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.134448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.134561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.134586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.134764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.134789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.134897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.134939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.135093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.478 [2024-07-22 12:28:10.135122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.478 qpair failed and we were unable to recover it. 00:33:02.478 [2024-07-22 12:28:10.135315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.135340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.135507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.135532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.135678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.135703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.135826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.135852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.136042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.136070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 
00:33:02.479 [2024-07-22 12:28:10.136237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.136265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.136407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.136432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.136581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.136605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.136762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.136787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.136966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.136991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.137143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.137170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.137324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.137351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.137493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.137518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.137685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.137726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.137900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.137925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 
00:33:02.479 [2024-07-22 12:28:10.138040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.138065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.138202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.138243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.138400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.138428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.138579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.138606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.138752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.138777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.138928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.138952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.139097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.139122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.139277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.139305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.139458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.139485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.139680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.139705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 
00:33:02.479 [2024-07-22 12:28:10.139875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.139903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.140052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.140079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.140233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.140257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.140426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.140454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.140594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.140626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.140767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.140792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.140943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.140967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.141158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.141185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.141353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.141378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 00:33:02.479 [2024-07-22 12:28:10.141559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.479 [2024-07-22 12:28:10.141587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.479 qpair failed and we were unable to recover it. 
00:33:02.479 [2024-07-22 12:28:10.141831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.479 [2024-07-22 12:28:10.141869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:02.479 qpair failed and we were unable to recover it.
00:33:02.480 [2024-07-22 12:28:10.146495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.480 [2024-07-22 12:28:10.146524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:02.480 qpair failed and we were unable to recover it.
00:33:02.480 [2024-07-22 12:28:10.146666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.480 [2024-07-22 12:28:10.146705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.480 qpair failed and we were unable to recover it.
00:33:02.480 [2024-07-22 12:28:10.148866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.480 [2024-07-22 12:28:10.148891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.480 qpair failed and we were unable to recover it.
00:33:02.480 [2024-07-22 12:28:10.149053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.480 [2024-07-22 12:28:10.149091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.480 qpair failed and we were unable to recover it. 00:33:02.480 [2024-07-22 12:28:10.149268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.149311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.149502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.149546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.149703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.149729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.149962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.150004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.150175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.150217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.150412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.150455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.150568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.150593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.150733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.150777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.150919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.150947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 
00:33:02.481 [2024-07-22 12:28:10.151074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.151103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.151275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.151328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.151468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.151494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.151665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.151696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.151847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.151875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.152001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.152028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.152186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.152213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.152400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.152444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.152592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.152624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.152791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.152834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 
00:33:02.481 [2024-07-22 12:28:10.152973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.153017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.153157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.153199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.153363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.153405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.153519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.153561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.153703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.153731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.153897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.153923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.154129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.154158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.154328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.154380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.154525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.154551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.154675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.154703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 
00:33:02.481 [2024-07-22 12:28:10.154840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.154885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.155018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.155061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.155226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.155254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.155392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.155417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.155531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.155558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.155697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.155739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.155934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.155976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.156169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.156197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.156360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.156385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.156510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.156548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 
00:33:02.481 [2024-07-22 12:28:10.156729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.156774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.481 [2024-07-22 12:28:10.156947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.481 [2024-07-22 12:28:10.156977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.481 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.157143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.157194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.157426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.157477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.157658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.157697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.157871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.157915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.158089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.158132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.158245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.158272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.158443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.158468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.158604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.158649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 
00:33:02.482 [2024-07-22 12:28:10.158798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.158824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.159024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.159051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.159216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.159255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.159503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.159537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.159693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.159719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.159878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.159905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.160055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.160082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.160214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.160256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.160456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.160505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.160682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.160709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 
00:33:02.482 [2024-07-22 12:28:10.160851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.160876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.161100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.161148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.161326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.161374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.161531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.161558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.161684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.161709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.161859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.161900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.162066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.162103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.162298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.162347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.162504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.162532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.162700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.162726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 
00:33:02.482 [2024-07-22 12:28:10.162880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.162904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.163040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.163067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.163296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.163344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.163480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.163508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.163683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.163708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.163853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.163878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.164046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.164074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.164218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.164262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.164419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.164449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.164584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.164610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 
00:33:02.482 [2024-07-22 12:28:10.164738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.164763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.164884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.164910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.165082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.482 [2024-07-22 12:28:10.165110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.482 qpair failed and we were unable to recover it. 00:33:02.482 [2024-07-22 12:28:10.165239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.165267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.165393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.165420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.165547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.165571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.165714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.165752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.165921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.165964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.166107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.166152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.166310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.166338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 
00:33:02.483 [2024-07-22 12:28:10.166525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.166553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.166705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.166730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.166874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.166915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.167179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.167233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.167470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.167498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.167691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.167718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.167838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.167863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.168033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.168059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.168287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.168328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.168492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.168520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 
00:33:02.483 [2024-07-22 12:28:10.168683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.168709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.168829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.168854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.169111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.169138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.169280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.169323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.169454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.169482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.169675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.169700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.169835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.169859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.169972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.170013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.170149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.170178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.170334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.170361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 
00:33:02.483 [2024-07-22 12:28:10.170524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.170552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.170725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.170750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.170867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.170892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.171014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.171039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.171188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.171213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.171373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.171401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.171548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.171574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.171729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.171754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.171862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.171887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.172037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.172079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 
00:33:02.483 [2024-07-22 12:28:10.172250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.172274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.172411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.172443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.172571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.172598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.172746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.172771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.483 qpair failed and we were unable to recover it. 00:33:02.483 [2024-07-22 12:28:10.172914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.483 [2024-07-22 12:28:10.172939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.173102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.173129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.173279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.173306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.173437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.173465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.173642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.173667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.173777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.173802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 
00:33:02.484 [2024-07-22 12:28:10.173915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.173940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.174061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.174087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.174259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.174287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.174436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.174463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.174587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.174621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.174769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.174794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.174937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.174961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.175099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.175124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.175261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.175286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.175428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.175455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 
00:33:02.484 [2024-07-22 12:28:10.175582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.175610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.175816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.175841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.175991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.176015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.176182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.176209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.176328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.176355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.176489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.176514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.176666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.176692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.176861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.176886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.177029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.177053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 00:33:02.484 [2024-07-22 12:28:10.177200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.177224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it. 
00:33:02.484 [2024-07-22 12:28:10.177341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.484 [2024-07-22 12:28:10.177365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.484 qpair failed and we were unable to recover it.
[... the same three-message failure (connect() failed, errno = 111 / sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back with only the timestamps advancing, from 12:28:10.177 through 12:28:10.202 ...]
00:33:02.487 [2024-07-22 12:28:10.198775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.487 [2024-07-22 12:28:10.198813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.487 qpair failed and we were unable to recover it.
[... from 12:28:10.198 onward the identical failure is also reported for tqpair=0x7f0544000b90, briefly interleaved with tqpair=0xa58450, and then repeats for tqpair=0x7f0544000b90 alone through 12:28:10.215; every attempt targets addr=10.0.0.2, port=4420 and each ends with "qpair failed and we were unable to recover it." ...]
00:33:02.490 [2024-07-22 12:28:10.215341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.215370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.215544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.215569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.215741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.215770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.215931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.215961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.216105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.216131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.216284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.216328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.216484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.216512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.216661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.216688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.216808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.216834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.216974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.217003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 
00:33:02.490 [2024-07-22 12:28:10.217140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.217165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.217308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.217334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.217494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.217519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.217635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.217662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.217805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.217831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.217960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.217996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.218164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.218191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.218354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.218383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.218567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.218596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.218747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.218772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 
00:33:02.490 [2024-07-22 12:28:10.218921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.218947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.219073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.219104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.219222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.219247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.219372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.219398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.219570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.219620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.219791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.219816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.220007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.220034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.220210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.220241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.220388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.220414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.220559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.220584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 
00:33:02.490 [2024-07-22 12:28:10.220793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.220822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.220997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.221027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.221180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.221206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.221358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.221402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.221546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.221571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.221697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.221723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.221894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.221922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.222125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.222149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.222286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.222322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.490 qpair failed and we were unable to recover it. 00:33:02.490 [2024-07-22 12:28:10.222499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.490 [2024-07-22 12:28:10.222535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 
00:33:02.491 [2024-07-22 12:28:10.222708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.222735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.222877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.222917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.223099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.223126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.223289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.223314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.223483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.223514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.223672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.223698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.223838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.223862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.223979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.224021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.224209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.224237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.224373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.224416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 
00:33:02.491 [2024-07-22 12:28:10.224582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.224621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.224764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.224790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.224935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.224960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.225083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.225109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.225256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.225281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.225425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.225449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.225564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.225599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.225738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.225764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.225927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.225966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.226168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.226212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 
00:33:02.491 [2024-07-22 12:28:10.226351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.226395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.226562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.226587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.226716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.226742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.226889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.226917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.227086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.227111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.227257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.227282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.227459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.227484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.227633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.227660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.227832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.227875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.228045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.228086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 
00:33:02.491 [2024-07-22 12:28:10.228249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.228293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.228473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.228504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.228627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.228654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.228817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.228845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.228997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.229024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.229209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.229238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.229423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.229448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.229594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.229626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.229758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.229799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 00:33:02.491 [2024-07-22 12:28:10.229963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.230005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.491 qpair failed and we were unable to recover it. 
00:33:02.491 [2024-07-22 12:28:10.230148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.491 [2024-07-22 12:28:10.230175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.230355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.230398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.230553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.230578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.230714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.230761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.230900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.230942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.231089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.231134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.231251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.231278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.231421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.231446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.231562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.231587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.231827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.231871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 
00:33:02.492 [2024-07-22 12:28:10.232070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.232098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.232305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.232352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.232465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.232492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.232658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.232686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.232866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.232911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.233138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.233166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.233299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.233324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.233465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.233491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.233660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.233689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.233845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.233892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 
00:33:02.492 [2024-07-22 12:28:10.234056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.234097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.234273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.234298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.234438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.234462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.234618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.234643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.234776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.234819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.234953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.234998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.235193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.235235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.235353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.235378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.235541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.235566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.235707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.235751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 
00:33:02.492 [2024-07-22 12:28:10.235925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.235968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.236154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.236185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.236337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.236363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.236483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.236509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.236629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.236655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.236878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.236903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.237044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.237069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.237195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.237220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.237345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.237371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.237492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.237517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 
00:33:02.492 [2024-07-22 12:28:10.237644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.237670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.492 qpair failed and we were unable to recover it. 00:33:02.492 [2024-07-22 12:28:10.237817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.492 [2024-07-22 12:28:10.237860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.237993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.238036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.238178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.238202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.238345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.238370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.238493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.238518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.238684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.238728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.238862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.238890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.239048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.239073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 00:33:02.493 [2024-07-22 12:28:10.239219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.493 [2024-07-22 12:28:10.239244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.493 qpair failed and we were unable to recover it. 
00:33:02.493 [2024-07-22 12:28:10.239465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.493 [2024-07-22 12:28:10.239490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.493 qpair failed and we were unable to recover it.
[... 208 further occurrences of the same connect()/qpair-failure triplet elided (timestamps 12:28:10.239657 through 12:28:10.277069): every attempt fails with errno = 111 against addr=10.0.0.2, port=4420; tqpair is 0xa58450 for most attempts, with short runs of 0x7f054c000b90 ...]
00:33:02.498 [2024-07-22 12:28:10.277232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-07-22 12:28:10.277257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.498 qpair failed and we were unable to recover it.
00:33:02.498 [2024-07-22 12:28:10.277382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.277407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.277553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.277577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.277712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.277737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.277853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.277877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.278064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.278088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.278235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.278260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.278383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.278408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.278603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.278637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.278785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.278810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.278930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.278956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 
00:33:02.498 [2024-07-22 12:28:10.279150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.279174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.279290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.279315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.279461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.279486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.279630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.279672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.279819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.279844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.279960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.279985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.280129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.280156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.498 [2024-07-22 12:28:10.280317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.498 [2024-07-22 12:28:10.280342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.498 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.280464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.280489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.280630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.280656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 
00:33:02.499 [2024-07-22 12:28:10.280841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.280867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.280979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.281021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.281180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.281208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.281355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.281379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.281490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.281515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.281680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.281708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.281855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.281880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.282027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.282052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.282195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.282222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.282384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.282413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 
00:33:02.499 [2024-07-22 12:28:10.282536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.282561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.282675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.282700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.282840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.282865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.283014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.283038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.283155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.283180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.283296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.283322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.283462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.283486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.283629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.283657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.283822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.283847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.283968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.283994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 
00:33:02.499 [2024-07-22 12:28:10.284155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.284183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.284322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.284347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.284458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.284483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.284625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.284654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.284799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.284824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.284969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.284993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.285163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.285190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.285359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.285383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.285499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.285525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.285692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.285722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 
00:33:02.499 [2024-07-22 12:28:10.285887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.285911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.286060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.286102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.286259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.286287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.286451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.286476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.286621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.286646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.286812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.286839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.286978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.287007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.287175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.287200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.287312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.287337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.499 qpair failed and we were unable to recover it. 00:33:02.499 [2024-07-22 12:28:10.287486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.499 [2024-07-22 12:28:10.287510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 
00:33:02.500 [2024-07-22 12:28:10.287651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.287695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.287879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.287904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.288048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.288072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.288190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.288231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.288392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.288419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.288589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.288620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.288732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.288756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.288913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.288938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.289051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.289075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.289192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.289216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 
00:33:02.500 [2024-07-22 12:28:10.289416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.289459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.289641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.289670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.289785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.289828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.289997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.290025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.290182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.290207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.290371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.290400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.290569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.290598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.290778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.290803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.290959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.290986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.291174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.291220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 
00:33:02.500 [2024-07-22 12:28:10.291409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.291434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.291622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.291650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.291820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.291844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.291959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.291985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.292153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.292178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.292320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.292361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.292527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.292552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.292726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.292754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.292881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.292909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.293076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.293101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 
00:33:02.500 [2024-07-22 12:28:10.293221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.500 [2024-07-22 12:28:10.293261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.500 qpair failed and we were unable to recover it. 00:33:02.500 [2024-07-22 12:28:10.293414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.293441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.293617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.293644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.293788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.293813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.294025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.294074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.294240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.294266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.294402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.294427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.294584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.294646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.294828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.294855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.294980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.295005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 
00:33:02.501 [2024-07-22 12:28:10.295198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.295226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.295397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.295423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.295564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.295589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.295754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.295780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.295948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.295973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.296135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.296163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.296366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.296393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.296534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.296559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.296708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.296733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.296897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.296939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 
00:33:02.501 [2024-07-22 12:28:10.297100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.297125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.297272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.297314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.297470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.297498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.297672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.297699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.297813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.297838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.297984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.298011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.298143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.298169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.298317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.298342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.298487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.298527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 00:33:02.501 [2024-07-22 12:28:10.298665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.501 [2024-07-22 12:28:10.298690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.501 qpair failed and we were unable to recover it. 
00:33:02.502 [2024-07-22 12:28:10.298837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.298862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.299025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.299050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.299187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.299212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.299358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.299382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.299520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.299550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.299690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.299715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.299831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.299856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.300031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.300059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.300198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.300224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 00:33:02.502 [2024-07-22 12:28:10.300347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.300371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it. 
00:33:02.502 [2024-07-22 12:28:10.300519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.502 [2024-07-22 12:28:10.300546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.502 qpair failed and we were unable to recover it.
[... the same three-entry failure group repeats without interruption from 12:28:10.300519 through 12:28:10.337369 (log replay stamps 00:33:02.502 to 00:33:02.507): every connect() attempt fails with errno = 111 against addr=10.0.0.2, port=4420, the reported qpair alternates between tqpair=0xa58450 and tqpair=0x7f0554000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:02.507 [2024-07-22 12:28:10.337535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.337563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.337735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.337761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.337878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.337919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.338086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.338111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.338225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.338252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.338394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.338419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.338567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.338595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.338776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.338801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.338938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.338963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.339108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.339133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 
00:33:02.507 [2024-07-22 12:28:10.339273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.339298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.339414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.339440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.339562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.339591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.339730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.339769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.339919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.339946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.340112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.340154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.340323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.340366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.340594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.340630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.340774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.340801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.340942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.340968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 
00:33:02.507 [2024-07-22 12:28:10.341138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.341163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.341309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.341352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.341497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.341525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.341645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.341671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.341812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.341838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.342029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.342057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.507 [2024-07-22 12:28:10.342253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.507 [2024-07-22 12:28:10.342281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.507 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.342400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.342428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.342572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.342597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.342728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.342753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 
00:33:02.508 [2024-07-22 12:28:10.342873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.342899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.343063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.343091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.343270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.343298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.343428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.343458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.343596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.343630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.343778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.343805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.344037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.344065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.344298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.344346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.344519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.344544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.344704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.344730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 
00:33:02.508 [2024-07-22 12:28:10.344894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.344937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.345119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.345161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.345353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.345381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.345568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.345593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.345727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.345752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.345913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.345956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.346097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.346140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.346299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.346325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.346493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.346519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.346692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.346737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 
00:33:02.508 [2024-07-22 12:28:10.346917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.346944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.347137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.347179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.347289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.347319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.347436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.347461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.347578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.347603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.347764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.347789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.347934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.347979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.348145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.348188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.348330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.348355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.348499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.348524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 
00:33:02.508 [2024-07-22 12:28:10.348713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.348756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.348923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.348966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.349194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.349236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.349414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.349440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.349562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.349587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.349770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.349814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.349978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.350020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.350190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.350231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.350376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.350401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.350520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.350545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 
00:33:02.508 [2024-07-22 12:28:10.350724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.350767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.350932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.350962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.351099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.351129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.351290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.351319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.351484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.351512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.508 qpair failed and we were unable to recover it. 00:33:02.508 [2024-07-22 12:28:10.351682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.508 [2024-07-22 12:28:10.351708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.351853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.351878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.352019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.352048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.352209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.352237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.352404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.352432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 
00:33:02.509 [2024-07-22 12:28:10.352584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.352620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.352786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.352812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.352974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.353020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.353214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.353242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.353398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.353442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.353587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.353617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.353767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.353794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.353959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.353987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.354171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.354215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.354381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.354424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 
00:33:02.509 [2024-07-22 12:28:10.354572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.354598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.354774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.354822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.354990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.355040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.355225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.355252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.355371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.355396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.355535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.355560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.355713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.355758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.355893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.355938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.356081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.356125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.356258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.356300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 
00:33:02.509 [2024-07-22 12:28:10.356452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.356479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.356628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.356654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.356822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.356866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.357028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.357070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.357230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.357274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.357422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.357460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.357594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.357628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.357779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.357807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.357992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.358019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.358202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.358250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 
00:33:02.509 [2024-07-22 12:28:10.358384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.358412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.358582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.358609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.358799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.358846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.358986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.359015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.359228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.359277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.359446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.359471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.359594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.359626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.359767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.359811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.360009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.360053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.360226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.360251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 
00:33:02.509 [2024-07-22 12:28:10.360420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.360445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.360590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.360629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.360771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.360800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.360981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.361023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.361160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.509 [2024-07-22 12:28:10.361204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.509 qpair failed and we were unable to recover it. 00:33:02.509 [2024-07-22 12:28:10.361330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.510 [2024-07-22 12:28:10.361356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.510 qpair failed and we were unable to recover it. 00:33:02.510 [2024-07-22 12:28:10.361468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.510 [2024-07-22 12:28:10.361495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.510 qpair failed and we were unable to recover it. 00:33:02.510 [2024-07-22 12:28:10.361635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.510 [2024-07-22 12:28:10.361661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.510 qpair failed and we were unable to recover it. 00:33:02.510 [2024-07-22 12:28:10.361809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.510 [2024-07-22 12:28:10.361838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.510 qpair failed and we were unable to recover it. 00:33:02.510 [2024-07-22 12:28:10.362004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.510 [2024-07-22 12:28:10.362048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.510 qpair failed and we were unable to recover it. 
00:33:02.510 [2024-07-22 12:28:10.362191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.510 [2024-07-22 12:28:10.362216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.510 qpair failed and we were unable to recover it.
00:33:02.510 [2024-07-22 12:28:10.366120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.510 [2024-07-22 12:28:10.366150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.510 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for the rest of this burst (target timestamps 12:28:10.362 through 12:28:10.400, Jenkins elapsed time 00:33:02.510 through 00:33:02.804), interleaved between the two qpairs tqpair=0x7f054c000b90 and tqpair=0xa58450, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:33:02.804 [2024-07-22 12:28:10.400254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.804 [2024-07-22 12:28:10.400304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.804 qpair failed and we were unable to recover it.
00:33:02.804 [2024-07-22 12:28:10.400490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.804 [2024-07-22 12:28:10.400517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.400694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.400719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.400836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.400863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.401050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.401077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.401238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.401279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.401512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.401536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.401707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.401731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.401855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.401879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.402049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.402075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.402226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.402252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 
00:33:02.805 [2024-07-22 12:28:10.402451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.402479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.402663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.402703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.402858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.402884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.403050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.403093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.403283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.403326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.403448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.403474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.403595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.403627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.403752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.403778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.403900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.403926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.404047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.404072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 
00:33:02.805 [2024-07-22 12:28:10.404222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.404247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.404415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.404440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.404581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.404606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.404778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.404807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.404990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.405018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.405173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.405200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.405337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.405360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.405479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.405504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.405671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.405699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.405880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.405924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 
00:33:02.805 [2024-07-22 12:28:10.406087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.406115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.406321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.406364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.406587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.406612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.805 [2024-07-22 12:28:10.406785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.805 [2024-07-22 12:28:10.406811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.805 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.406981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.407024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.407193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.407220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.407420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.407450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.407594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.407625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.407774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.407817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.408010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.408054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 
00:33:02.806 [2024-07-22 12:28:10.408186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.408230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.408364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.408389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.408533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.408558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.408701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.408745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.408919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.408964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.409128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.409170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.409399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.409442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.409589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.409619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.409768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.409811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.409975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.410017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 
00:33:02.806 [2024-07-22 12:28:10.410155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.410199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.410315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.410340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.410486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.410512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.410739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.410785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.410921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.410964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.411126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.411169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.411299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.411324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.411442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.411467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.411621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.411648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.411793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.411818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 
00:33:02.806 [2024-07-22 12:28:10.411956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.411981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.412097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.412122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.412252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.412280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.412430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.412455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.412598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.412637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.412785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.412810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.412927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.412951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.413066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.413091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.413203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.413229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.413377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.413403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 
00:33:02.806 [2024-07-22 12:28:10.413546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.413571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.413735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.413780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.413924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.413968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.414133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.414176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.806 [2024-07-22 12:28:10.414296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.806 [2024-07-22 12:28:10.414321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.806 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.414490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.414516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.414679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.414722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.414847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.414873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.414993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.415018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.415160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.415187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 
00:33:02.807 [2024-07-22 12:28:10.415355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.415379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.415490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.415513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.415682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.415709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.415890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.415918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.416082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.416107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.416266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.416290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.416469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.416493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.416661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.416686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.416828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.416852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.417020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.417071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 
00:33:02.807 [2024-07-22 12:28:10.417220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.417254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.417406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.417434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.417579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.417607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.417762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.417788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.417944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.417988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.418157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.418200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.418390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.418433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.418581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.418606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.418734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.418760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.418892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.418935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 
00:33:02.807 [2024-07-22 12:28:10.419094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.419136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.419293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.419336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.419476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.419501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.419624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.419650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.419800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.419829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.419997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.420025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.420165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.420206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.420325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.420354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.420484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.420512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.420688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.420713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 
00:33:02.807 [2024-07-22 12:28:10.420846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.420874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.421032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.421059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.421184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.421211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.421346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.421373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.421553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.421580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.421728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.807 [2024-07-22 12:28:10.421753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.807 qpair failed and we were unable to recover it. 00:33:02.807 [2024-07-22 12:28:10.421892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.421919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.422083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.422110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.422264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.422291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.422451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.422477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 
00:33:02.808 [2024-07-22 12:28:10.422624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.422666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.422809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.422833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.422968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.423012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.423177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.423219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.423385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.423428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.423656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.423681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.423831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.423856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.424002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.424049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.424209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.424252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 00:33:02.808 [2024-07-22 12:28:10.424423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.808 [2024-07-22 12:28:10.424448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.808 qpair failed and we were unable to recover it. 
00:33:02.808 [2024-07-22 12:28:10.424569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.808 [2024-07-22 12:28:10.424594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.808 qpair failed and we were unable to recover it.
00:33:02.808 [2024-07-22 12:28:10.424755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.808 [2024-07-22 12:28:10.424784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.808 qpair failed and we were unable to recover it.
00:33:02.808 [... the same three-line failure repeats, with only the timestamp changing, roughly 200 more times through 12:28:10.463788 (elapsed 00:33:02.808-00:33:02.813), in runs that alternate between tqpair=0xa58450 and tqpair=0x7f054c000b90; every connect() to 10.0.0.2 port 4420 fails with errno = 111 and no qpair recovers ...]
00:33:02.813 [2024-07-22 12:28:10.463957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.464000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.464152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.464178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.464316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.464341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.464459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.464484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.464630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.464655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.464798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.464842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.465071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.465114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.465291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.465316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.465437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.465464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.465639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.465683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 
00:33:02.813 [2024-07-22 12:28:10.465850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.465894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.466060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.813 [2024-07-22 12:28:10.466088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.813 qpair failed and we were unable to recover it. 00:33:02.813 [2024-07-22 12:28:10.466269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.466294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.466416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.466444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.466621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.466663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.466818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.466846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.467000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.467028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.467186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.467213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.467402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.467430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.467590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.467619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 
00:33:02.814 [2024-07-22 12:28:10.467769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.467802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.467966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.467994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.468149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.468176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.468299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.468326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.468484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.468511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.468711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.468737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.468860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.468884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.469009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.469034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.469215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.469243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.469394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.469422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 
00:33:02.814 [2024-07-22 12:28:10.469551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.469578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.469731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.469756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.469916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.469943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.470126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.470153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.470380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.470408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.470588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.470622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.470757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.470783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.470929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.470954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.471107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.471135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.471315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.471344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 
00:33:02.814 [2024-07-22 12:28:10.471605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.471655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.471768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.471791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.471928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.471957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.472104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.472132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.472269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.472296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.472422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.472449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.472633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.472673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.472904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.472936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.473107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.473136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.473274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.473301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 
00:33:02.814 [2024-07-22 12:28:10.473474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.473512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.473640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.473667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.473785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.473812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.814 [2024-07-22 12:28:10.473955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.814 [2024-07-22 12:28:10.473983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.814 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.474140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.474167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.474290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.474317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.474490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.474517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.474664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.474692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.474858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.474886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.475070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.475119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 
00:33:02.815 [2024-07-22 12:28:10.475261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.475305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.475461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.475489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.475623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.475649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.475769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.475793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.475932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.475961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.476149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.476198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.476318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.476345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.476480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.476507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.476657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.476701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.476844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.476873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 
00:33:02.815 [2024-07-22 12:28:10.477007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.477036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.477195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.477223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.477352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.477382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.477565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.477593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.477775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.477805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.477949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.477992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.478174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.478223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.478411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.478460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.478583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.478609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.478740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.478767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 
00:33:02.815 [2024-07-22 12:28:10.478915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.478958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.479089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.479131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.479273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.479298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.479442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.479467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.479583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.479610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.479822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.479851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.480016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.480066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.480236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.480280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.480455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.480480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.480629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.480655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 
00:33:02.815 [2024-07-22 12:28:10.480847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.480875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.481030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.481074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.481211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.481253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.481400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.481426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.815 qpair failed and we were unable to recover it. 00:33:02.815 [2024-07-22 12:28:10.481545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.815 [2024-07-22 12:28:10.481571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.481746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.481790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.481970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.482013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.482159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.482203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.482350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.482378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.482520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.482545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 
00:33:02.816 [2024-07-22 12:28:10.482718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.482761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.482908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.482934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.483072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.483098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.483244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.483270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.483396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.483422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.483538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.483563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.483734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.483777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.483911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.483939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.484136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.484178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.484335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.484360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 
00:33:02.816 [2024-07-22 12:28:10.484503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.484528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.484687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.484731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.484903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.484945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.485133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.485160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.485286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.485316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.485459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.485484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.485598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.485630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.486147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.486176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.486352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.486378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.486529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.486555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 
00:33:02.816 [2024-07-22 12:28:10.486738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.486782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.486944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.486986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.487131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.487175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.487345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.487371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.487516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.487541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.487694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.487738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.487915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.487959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.488118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.488161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.488279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.488305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 00:33:02.816 [2024-07-22 12:28:10.488446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.816 [2024-07-22 12:28:10.488471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.816 qpair failed and we were unable to recover it. 
00:33:02.816 [2024-07-22 12:28:10.488626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.816 [2024-07-22 12:28:10.488651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.816 qpair failed and we were unable to recover it.
00:33:02.816 [... the same three-line failure sequence repeats continuously from 12:28:10.488 through 12:28:10.528 (~200 occurrences), cycling through tqpair handles 0x7f054c000b90, 0x7f0554000b90, 0x7f0544000b90, and 0xa58450, all with addr=10.0.0.2, port=4420 ...]
00:33:02.822 [2024-07-22 12:28:10.528138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.822 [2024-07-22 12:28:10.528166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:02.822 qpair failed and we were unable to recover it.
00:33:02.822 [2024-07-22 12:28:10.528323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.528352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.528484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.528514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.528672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.528700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.528850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.528876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.529012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.529040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.529226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.529254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.529408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.529437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.529597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.529631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.529763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.529790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.529948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.529976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 
00:33:02.822 [2024-07-22 12:28:10.530104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.530134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.530315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.530371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.530530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.530557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.530679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.530705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.530848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.530873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.531135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.531184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.531384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.531433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.531564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.531592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.531759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.531798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.531975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.532002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 
00:33:02.822 [2024-07-22 12:28:10.532173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.532204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.532432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.532475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.532644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.532670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.532828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.532870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.533037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.533081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.533242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.533269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.533413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.533439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.533560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.533586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.533761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.533810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.822 [2024-07-22 12:28:10.533982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.534012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 
00:33:02.822 [2024-07-22 12:28:10.534248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.822 [2024-07-22 12:28:10.534297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.822 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.534460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.534508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.534687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.534712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.534832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.534857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.535047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.535087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.535280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.535308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.535555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.535603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.535753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.535778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.535906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.535945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.536104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.536131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 
00:33:02.823 [2024-07-22 12:28:10.536373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.536422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.536594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.536626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.536781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.536819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.536961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.537000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.537139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.537183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.537444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.537493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.537638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.537664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.537891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.537933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.538122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.538165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.538318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.538366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 
00:33:02.823 [2024-07-22 12:28:10.538488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.538514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.538656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.538687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.538861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.538889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.539074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.539118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.539233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.539260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.539437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.539467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.539650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.539676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.539812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.539856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.540054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.540097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.540222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.540249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 
00:33:02.823 [2024-07-22 12:28:10.540368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.540395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.540537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.540563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.540735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.540780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.540941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.540984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.541126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.541152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.541277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.541303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.541431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.541459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.541608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.541656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.541775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.541803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.541942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.541970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 
00:33:02.823 [2024-07-22 12:28:10.542095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.542124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.823 [2024-07-22 12:28:10.542314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.823 [2024-07-22 12:28:10.542362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.823 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.542519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.542544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.542660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.542685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.542825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.542850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.543008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.543060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.543285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.543330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.543463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.543492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.543632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.543659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.543773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.543798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 
00:33:02.824 [2024-07-22 12:28:10.543936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.543961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.544181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.544233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.544366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.544398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.544564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.544591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.544763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.544788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.544930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.544973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.545169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.545219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.545373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.545400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.545531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.545560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.545727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.545753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 
00:33:02.824 [2024-07-22 12:28:10.545902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.545927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.546095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.546120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.546249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.546278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.546446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.546474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.546635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.546678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.546821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.546846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.547040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.547068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.547299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.547346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.547483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.547508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.547660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.547686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 
00:33:02.824 [2024-07-22 12:28:10.547800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.547824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.547940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.547965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.548145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.548172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.548309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.548349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.548501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.548529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.548789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.548827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.549064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.549108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.549259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.549302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.549470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.549511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.549666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.549692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 
00:33:02.824 [2024-07-22 12:28:10.549853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.549896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.550129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.550177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.824 qpair failed and we were unable to recover it. 00:33:02.824 [2024-07-22 12:28:10.550402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.824 [2024-07-22 12:28:10.550445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.550618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.550643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.550762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.550787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.550951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.550979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.551149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.551176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.551353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.551383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.551561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.551589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.551767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.551792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 
00:33:02.825 [2024-07-22 12:28:10.551938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.551966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.552097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.552124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.552257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.552283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.552441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.552468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.552590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.552622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.552764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.552787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.552926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.552952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.553134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.553161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.553298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.553325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 00:33:02.825 [2024-07-22 12:28:10.553537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.553568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it. 
00:33:02.825 [2024-07-22 12:28:10.553766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.825 [2024-07-22 12:28:10.553805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.825 qpair failed and we were unable to recover it.
[... the same two-line error repeats without interruption until 12:28:10.593931, cycling through tqpair values 0x7f0554000b90, 0x7f054c000b90, and 0xa58450, all with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:02.830 [2024-07-22 12:28:10.593902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.593931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it.
00:33:02.830 [2024-07-22 12:28:10.594090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.594118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.830 [2024-07-22 12:28:10.594268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.594294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.830 [2024-07-22 12:28:10.594451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.594475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.830 [2024-07-22 12:28:10.594590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.594619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.830 [2024-07-22 12:28:10.594761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.594801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.830 [2024-07-22 12:28:10.594927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.594954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.830 [2024-07-22 12:28:10.595089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.595116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.830 [2024-07-22 12:28:10.595264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.830 [2024-07-22 12:28:10.595291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.830 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.595449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.595473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.595592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.595630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 
00:33:02.831 [2024-07-22 12:28:10.595810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.595837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.596016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.596043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.596233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.596260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.596419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.596446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.596590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.596621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.596770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.596794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.596976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.597002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.597270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.597320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.597455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.597483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.597636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.597678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 
00:33:02.831 [2024-07-22 12:28:10.597786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.597810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.597948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.597988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.598122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.598151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.598330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.598357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.598521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.598547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.598698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.598726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.598844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.598869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.598987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.599026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.599181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.599208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.599366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.599393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 
00:33:02.831 [2024-07-22 12:28:10.599552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.599579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.599755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.599780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.599927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.599952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.600086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.600110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.600262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.600288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.600466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.600494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.600688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.600714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.600862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.600901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.601084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.601110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.601303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.601331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 
00:33:02.831 [2024-07-22 12:28:10.601465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.601492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.601653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.601678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.601795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.601819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.601960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.601988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.602163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.602190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.602348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.602374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.602520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.602545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.602720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.602745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.602908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.831 [2024-07-22 12:28:10.602934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.831 qpair failed and we were unable to recover it. 00:33:02.831 [2024-07-22 12:28:10.603088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.603115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 
00:33:02.832 [2024-07-22 12:28:10.603327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.603376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.603547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.603572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.603730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.603759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.603880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.603904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.604045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.604085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.604217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.604243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.604398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.604427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.604571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.604596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.604742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.604767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.604915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.604939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 
00:33:02.832 [2024-07-22 12:28:10.605076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.605104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.605234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.605261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.605437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.605464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.605643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.605685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.605833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.605858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.606035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.606060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.606183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.606224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.606354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.606381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.606498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.606525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.606694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.606719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 
00:33:02.832 [2024-07-22 12:28:10.606860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.606885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.607009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.607036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.607164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.607191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.607326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.607369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.607499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.607528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.607705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.607730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.607849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.607874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.608040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.608068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.608219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.608246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.608377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.608404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 
00:33:02.832 [2024-07-22 12:28:10.608566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.608591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.608715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.608740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.608864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.608889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.609024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.609053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.609221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.609250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.609388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.609416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.609551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.609575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.609751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.609776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.609894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.609918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.832 [2024-07-22 12:28:10.610106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.610134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 
00:33:02.832 [2024-07-22 12:28:10.610317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.832 [2024-07-22 12:28:10.610344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.832 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.610469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.610496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.610626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.610668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.610813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.610841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.610979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.611006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.611190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.611217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.611340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.611369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.611539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.611565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.611716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.611741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.611884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.611909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 
00:33:02.833 [2024-07-22 12:28:10.612072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.612100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.612285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.612310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.612472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.612499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.612694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.612719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.612836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.612861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.613009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.613051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.613203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.613231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.613407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.613432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.613572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.613596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.613748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.613775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 
00:33:02.833 [2024-07-22 12:28:10.613918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.613943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.614059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.614085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.614230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.614255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.614421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.614446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.614609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.614644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.614776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.614804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.614972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.614996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.615142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.615185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.615367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.615395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.615563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.615587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 
00:33:02.833 [2024-07-22 12:28:10.615715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.615744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.615864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.615904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.616043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.616068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.616242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.616266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.616414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.616438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.616584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.616609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.616730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.616755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.616895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.616920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.617038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.617063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 00:33:02.833 [2024-07-22 12:28:10.617181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.617206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it. 
00:33:02.833 [2024-07-22 12:28:10.617392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.833 [2024-07-22 12:28:10.617417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.833 qpair failed and we were unable to recover it.
00:33:02.833 [the same connect()/qpair-connect error triplet repeats back-to-back from 12:28:10.617392 through 12:28:10.654551, every occurrence with tqpair=0xa58450, addr=10.0.0.2, port=4420, errno = 111; only the microsecond timestamps differ]
00:33:02.839 [2024-07-22 12:28:10.654551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.654576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it.
00:33:02.839 [2024-07-22 12:28:10.654742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.654771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.654927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.654952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.655093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.655135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.655324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.655352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.655523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.655548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.655668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.655711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.655874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.655902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.656066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.656090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.656257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.656299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.656447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.656474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 
00:33:02.839 [2024-07-22 12:28:10.656608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.656639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.656783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.656824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.656968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.657008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.657176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.657200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.657343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.657368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.657534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.657562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.657711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.657736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.657845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.657870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.658004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.658031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.658197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.658222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 
00:33:02.839 [2024-07-22 12:28:10.658362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.658386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.658559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.658588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.658764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.658789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.658906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.658946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.659127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.659155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.659333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.659362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.659545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.659573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.659765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.659790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.839 [2024-07-22 12:28:10.659961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.839 [2024-07-22 12:28:10.659986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.839 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.660108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.660133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 
00:33:02.840 [2024-07-22 12:28:10.660251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.660276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.660412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.660437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.660595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.660629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.660757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.660785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.660926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.660951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.661091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.661116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.661307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.661331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.661473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.661499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.661641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.661667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.661775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.661800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 
00:33:02.840 [2024-07-22 12:28:10.661913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.661937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.662052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.662077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.662198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.662225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.662338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.662363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.662504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.662544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.662689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.662717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.662857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.662882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.663029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.663071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.663253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.663281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.663436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.663461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 
00:33:02.840 [2024-07-22 12:28:10.663587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.663623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.663747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.663772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.663942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.663966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.664135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.664164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.664295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.664324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.664484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.664509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.664668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.664696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.664857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.664884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.665075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.665100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.665258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.665285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 
00:33:02.840 [2024-07-22 12:28:10.665408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.665435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.665599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.665631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.665756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.665780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.665925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.665949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.666094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.666119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.666263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.666288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.666455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.666486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.666606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.666636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.666796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.666822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 00:33:02.840 [2024-07-22 12:28:10.666985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.840 [2024-07-22 12:28:10.667008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.840 qpair failed and we were unable to recover it. 
00:33:02.840 [2024-07-22 12:28:10.667126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.667150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.667294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.667318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.667478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.667505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.667672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.667698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.667891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.667919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.668104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.668132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.668272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.668297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.668443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.668468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.668651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.668679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.668856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.668881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 
00:33:02.841 [2024-07-22 12:28:10.669000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.669041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.669196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.669225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.669429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.669454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.669625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.669653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.669837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.669865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.670056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.670081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.670200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.670242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.670363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.670391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.670560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.670585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.670738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.670763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 
00:33:02.841 [2024-07-22 12:28:10.670933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.670958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.671125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.671151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.671312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.671339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.671520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.671553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.671692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.671718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.671838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.671864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.672057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.672085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.672245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.672270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.672419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.672444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.672563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.672588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 
00:33:02.841 [2024-07-22 12:28:10.672704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.672729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.672848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.672873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.673067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.673095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.673222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.673247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.673394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.673419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.673576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.673601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.673735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.673759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.673918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.673957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.674115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.674145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.674331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.674356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 
00:33:02.841 [2024-07-22 12:28:10.674523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.674566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.841 qpair failed and we were unable to recover it. 00:33:02.841 [2024-07-22 12:28:10.674745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.841 [2024-07-22 12:28:10.674772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.674919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.674946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.675102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.675127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.675267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.675292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.675434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.675459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.675576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.675641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.675783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.675808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.675930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.675954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.676071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.676096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 
00:33:02.842 [2024-07-22 12:28:10.676262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.676290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.676463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.676489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.676647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.676676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.676832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.676857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.676998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.677025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.677173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.677198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.677315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.677340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.677458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.677483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.677688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.677744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 00:33:02.842 [2024-07-22 12:28:10.677865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.842 [2024-07-22 12:28:10.677891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:02.842 qpair failed and we were unable to recover it. 
00:33:02.842 [2024-07-22 12:28:10.678039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.842 [2024-07-22 12:28:10.678066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:02.842 qpair failed and we were unable to recover it.
00:33:02.842 [2024-07-22 12:28:10.679292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.842 [2024-07-22 12:28:10.679331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.842 qpair failed and we were unable to recover it.
00:33:02.842 [2024-07-22 12:28:10.680323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.842 [2024-07-22 12:28:10.680350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:02.842 qpair failed and we were unable to recover it.
00:33:02.842 [2024-07-22 12:28:10.681130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.842 [2024-07-22 12:28:10.681169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.842 qpair failed and we were unable to recover it.
00:33:02.842 [2024-07-22 12:28:10.681304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.842 [2024-07-22 12:28:10.681333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.842 qpair failed and we were unable to recover it.
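errno = 111 is ECONNREFUSED on Linux: each connect() issued by posix_sock_create is actively refused because nothing is accepting on 10.0.0.2 port 4420 (the NVMe/TCP default), so nvme_tcp_qpair_connect_sock fails the qpair every time. A minimal standalone sketch of the same failure mode (illustrative only, not SPDK code; the address and port are taken from the log above):

/* Minimal sketch, not SPDK code: attempt the TCP connect the log shows
 * failing and report errno the same way. With no listener on the target,
 * connect() fails with errno 111 (ECONNREFUSED). */
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}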
00:33:02.843 [2024-07-22 12:28:10.687584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.843 [2024-07-22 12:28:10.687628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.843 qpair failed and we were unable to recover it.
00:33:02.843 [2024-07-22 12:28:10.688722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.843 [2024-07-22 12:28:10.688749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.843 qpair failed and we were unable to recover it.
00:33:02.844 [2024-07-22 12:28:10.694946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.844 [2024-07-22 12:28:10.694985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.844 qpair failed and we were unable to recover it.
00:33:02.845 [2024-07-22 12:28:10.698534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.845 [2024-07-22 12:28:10.698561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:02.845 qpair failed and we were unable to recover it.
00:33:02.845 [2024-07-22 12:28:10.700772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.845 [2024-07-22 12:28:10.700800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:02.845 qpair failed and we were unable to recover it.
00:33:03.130 [2024-07-22 12:28:10.702508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.130 [2024-07-22 12:28:10.702536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.130 qpair failed and we were unable to recover it.
00:33:03.131 [2024-07-22 12:28:10.711317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.131 [2024-07-22 12:28:10.711373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.131 qpair failed and we were unable to recover it.
00:33:03.131 [2024-07-22 12:28:10.713706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.131 [2024-07-22 12:28:10.713740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.131 qpair failed and we were unable to recover it.
00:33:03.132 [2024-07-22 12:28:10.717246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.717273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.717405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.717434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.717586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.717633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.717766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.717792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.717924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.717949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.718118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.718143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.718287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.718311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.718454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.718501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.718634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.718666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.718816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.718842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 
00:33:03.132 [2024-07-22 12:28:10.719015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.719061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.719243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.719289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.719436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.719461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.719612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.719665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.719826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.719854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.720037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.720065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.720223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.720267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.720426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.720453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.720611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.720663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.720823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.720852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 
00:33:03.132 [2024-07-22 12:28:10.721009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.721035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.721196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.721223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.721385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.721413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.721564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.721591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.721756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.721794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.721970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.722000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.722137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.722167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.722334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.722362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.132 [2024-07-22 12:28:10.722519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.132 [2024-07-22 12:28:10.722543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.132 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.722691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.722718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 
00:33:03.133 [2024-07-22 12:28:10.722833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.722860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.722996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.723024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.723184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.723211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.723352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.723377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.723543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.723569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.723716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.723743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.723905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.723934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.724111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.724167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.724330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.724358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.724509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.724536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 
00:33:03.133 [2024-07-22 12:28:10.724679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.724705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.724828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.724853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.724996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.725021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.725203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.725230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.725381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.725409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.725547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.725572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.725719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.725745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.725890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.725916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.726060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.726086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.726246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.726275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 
00:33:03.133 [2024-07-22 12:28:10.726467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.726495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.726642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.726686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.726855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.726880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.727046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.727075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.727259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.727287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.727408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.727436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.727625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.727669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.727787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.727812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.727937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.727966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.728140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.728183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 
00:33:03.133 [2024-07-22 12:28:10.728372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.728422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.728544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.728573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.728721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.728747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.728894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.728942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.729111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.729140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.729337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.729383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.729506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.729532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.729700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.729748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.133 qpair failed and we were unable to recover it. 00:33:03.133 [2024-07-22 12:28:10.729889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.133 [2024-07-22 12:28:10.729931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.730098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.730142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 
00:33:03.134 [2024-07-22 12:28:10.730311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.730355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.730502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.730530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.730673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.730702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.730926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.730969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.731146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.731173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.731348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.731378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.731567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.731593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.731744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.731769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.731919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.731944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.732111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.732161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 
00:33:03.134 [2024-07-22 12:28:10.732324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.732351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.732505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.732533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.732691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.732719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.732845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.732871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.733011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.733053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.733214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.733257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.733466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.733491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.733678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.733722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.733889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.733914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.734083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.734108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 
00:33:03.134 [2024-07-22 12:28:10.734247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.734272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.734424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.734451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.734569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.734594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.734725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.734763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.734959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.734988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.735174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.735201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.735419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.735467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.735633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.735659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.735771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.735796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.735912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.735937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 
00:33:03.134 [2024-07-22 12:28:10.736161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.134 [2024-07-22 12:28:10.736189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.134 qpair failed and we were unable to recover it. 00:33:03.134 [2024-07-22 12:28:10.736347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.736374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.736555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.736593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.736715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.736741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.736864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.736894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.737072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.737110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.737267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.737315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.737437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.737464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.737610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.737641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.737780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.737824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 
00:33:03.135 [2024-07-22 12:28:10.737970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.737995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.738188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.738232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.738375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.738400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.738519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.738545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.738710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.738741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.738916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.738959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.739127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.739157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.739365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.739414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.739601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.739660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.739799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.739827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 
00:33:03.135 [2024-07-22 12:28:10.739983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.740011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.740204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.740252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.740406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.740434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.740561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.740586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.740782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.740825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.740984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.741028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.741227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.741278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.741469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.741518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.741657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.741683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 00:33:03.135 [2024-07-22 12:28:10.741821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.135 [2024-07-22 12:28:10.741849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.135 qpair failed and we were unable to recover it. 
00:33:03.135 [2024-07-22 12:28:10.742002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.135 [2024-07-22 12:28:10.742030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.135 qpair failed and we were unable to recover it.
00:33:03.135 [2024-07-22 12:28:10.742242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.135 [2024-07-22 12:28:10.742302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.135 qpair failed and we were unable to recover it.
00:33:03.135 [2024-07-22 12:28:10.742509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.135 [2024-07-22 12:28:10.742565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.135 qpair failed and we were unable to recover it.
00:33:03.136 [2024-07-22 12:28:10.744929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.136 [2024-07-22 12:28:10.744961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.136 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats continuously from [2024-07-22 12:28:10.742002] through [2024-07-22 12:28:10.781708], cycling over tqpair values 0xa58450, 0x7f0544000b90, 0x7f054c000b90, and 0x7f0554000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:33:03.140 [2024-07-22 12:28:10.781849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.781878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.782022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.782046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.782189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.782216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.782396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.782423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.782569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.782593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.782757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.782786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.782982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.783011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.783162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.783190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.783375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.783402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.783560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.783587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 
00:33:03.141 [2024-07-22 12:28:10.783795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.783821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.783990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.784019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.784181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.784209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.784375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.784400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.784587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.784622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.784829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.784857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.785021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.785046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.785185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.785211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.785367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.785392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.785537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.785563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 
00:33:03.141 [2024-07-22 12:28:10.785738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.785764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.785876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.785918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.786063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.786087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.786209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.786235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.786416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.786442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.786594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.786632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.786759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.786784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.786926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.786952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.787071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.787097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.787242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.787286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 
00:33:03.141 [2024-07-22 12:28:10.787474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.787502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.787670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.787698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.787892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.787934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.788111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.788141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.788306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.788331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.788474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.788515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.788684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.788710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.788884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.788910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.789073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.789100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.789229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.789257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 
00:33:03.141 [2024-07-22 12:28:10.789396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.141 [2024-07-22 12:28:10.789423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.141 qpair failed and we were unable to recover it. 00:33:03.141 [2024-07-22 12:28:10.789607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.789643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.789811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.789836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.789960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.789984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.790111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.790151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.790280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.790307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.790453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.790478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.790619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.790644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.790793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.790817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.790961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.790986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 
00:33:03.142 [2024-07-22 12:28:10.791144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.791172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.791321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.791347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.791461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.791486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.791631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.791656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.791779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.791804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.791980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.792005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.792172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.792200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.792359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.792387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.792550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.792575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.792727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.792757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 
00:33:03.142 [2024-07-22 12:28:10.792923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.792950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.793118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.793143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.793303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.793330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.793469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.793497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.793658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.793683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.793836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.793861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.794003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.794029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.794171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.794195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.794336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.794363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.794530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.794555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 
00:33:03.142 [2024-07-22 12:28:10.794700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.794725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.794864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.794904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.795075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.795100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.795252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.795277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.795439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.795469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.795634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.795677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.795819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.795844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.795985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.796028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.796157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.796185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.796327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.796352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 
00:33:03.142 [2024-07-22 12:28:10.796495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.796535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.796682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.796707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.142 [2024-07-22 12:28:10.796853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.142 [2024-07-22 12:28:10.796879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.142 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.796998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.797022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.797167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.797191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.797375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.797400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.797565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.797593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.797795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.797821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.797965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.797990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.798179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.798207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 
00:33:03.143 [2024-07-22 12:28:10.798360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.798389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.798576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.798601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.798765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.798790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.798916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.798942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.799114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.799139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.799284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.799325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.799486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.799513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.799660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.799687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.799832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.799858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.800022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.800047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 
00:33:03.143 [2024-07-22 12:28:10.800175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.800201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.800345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.800370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.800519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.800547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.800713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.800739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.800881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.800922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.801085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.801112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.801251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.801276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.801420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.801445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.801590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.801629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.801772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.801798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 
00:33:03.143 [2024-07-22 12:28:10.801951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.801976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.802148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.802175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.802304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.802329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.802474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.802498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.802716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.802741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.802860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.802884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.803005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.143 [2024-07-22 12:28:10.803031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.143 qpair failed and we were unable to recover it. 00:33:03.143 [2024-07-22 12:28:10.803157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.803183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.803300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.803325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.803464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.803506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 
00:33:03.144 [2024-07-22 12:28:10.803682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.803707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.803855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.803880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.804028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.804056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.804191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.804218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.804381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.804406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.804517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.804543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.804717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.804744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.804878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.804907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.805056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.805081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.805284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.805312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 
00:33:03.144 [2024-07-22 12:28:10.805459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.805485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.805601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.805632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.805801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.805827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.805966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.805993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.806151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.806179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.806325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.806353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.806513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.806541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.806740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.806765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.806934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.806964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.807140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.807169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 
00:33:03.144 [2024-07-22 12:28:10.807330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.807355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.807491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.807515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.807713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.807739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.807851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.807877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.808047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.808089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.808241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.808270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.808440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.808465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.808686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.808711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.808824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.808849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 00:33:03.144 [2024-07-22 12:28:10.808966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.144 [2024-07-22 12:28:10.808990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.144 qpair failed and we were unable to recover it. 
00:33:03.144 [2024-07-22 12:28:10.809129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.809153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.809273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.809297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.809440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.809465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.809639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.809664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.809808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.809832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.810007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.810035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.810206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.810230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.810374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.810399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.144 qpair failed and we were unable to recover it.
00:33:03.144 [2024-07-22 12:28:10.810567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.144 [2024-07-22 12:28:10.810610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.810761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.810787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.810919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.810961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.811126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.811150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.811272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.811297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.811414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.811440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.811606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.811642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.811779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.811805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.811951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.811993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.812141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.812168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.812331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.812360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.812531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.812556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.812718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.812743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.812860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.812885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.813013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.813038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.813175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.813199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.813377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.813402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.813538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.813565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.813758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.813782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.813898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.813922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.814066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.814091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.814261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.814288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.814422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.814447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.814583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.814607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.814733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.814758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.814895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.814920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.815038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.815080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.815208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.815235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.815381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.815406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.815547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.815571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.815731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.815756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.815878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.815903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.816015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.816039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.817320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.817355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.817534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.817707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.817733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.817877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.817901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.818070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.818099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.818289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.818336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.818493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.818522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.818684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.145 [2024-07-22 12:28:10.818711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.145 qpair failed and we were unable to recover it.
00:33:03.145 [2024-07-22 12:28:10.818831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.818855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.818998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.819023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.819232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.819258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.819392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.819420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.819618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.819644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.820410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.820441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.820595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.820626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.820753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.820779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.820897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.820921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.821057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.821082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.821296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.821341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.821505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.821532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.821653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.821679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.821804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.821831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.822005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.822030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.822186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.822239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.822402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.822452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.822653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.822681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.822804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.822831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.822973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.823001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.823141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.823167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.823295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.823333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.823545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.823587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.823775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.823805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.823923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.823949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.824156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.824204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.824344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.824369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.824488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.824513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.824699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.824725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.824832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.824856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.824960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.824984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.825156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.825183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.825317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.825342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.825480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.825506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.825720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.825746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.825864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.825888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.826031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.826073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.826234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.826261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.826418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.826443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.826584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.826634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.826773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.146 [2024-07-22 12:28:10.826798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.146 qpair failed and we were unable to recover it.
00:33:03.146 [2024-07-22 12:28:10.826936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.826962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.827105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.827129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.827276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.827300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.827446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.827470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.827581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.827606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.827756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.827780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.827897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.827922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.828064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.828106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.828255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.828282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.828427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.828451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.828634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.828661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.828775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.828799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.828916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.828941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.829080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.829121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.829255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.829283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.829417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.829459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.829611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.829643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.829783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.829807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.829916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.829941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.830056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.830080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.830266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.830294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.830457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.830482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.830646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.830675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.830806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.830835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.830989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.831015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.831180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.831212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.831415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.831443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.831599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.831641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.831780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.831805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.832000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.832033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.832206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.832251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.832442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.832468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.832632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.832660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.832817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.832841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.832976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.147 [2024-07-22 12:28:10.833002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.147 qpair failed and we were unable to recover it.
00:33:03.147 [2024-07-22 12:28:10.833159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.833191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.833334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.833362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.834172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.834204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.834378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.834403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.834548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.834590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.834740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.834764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.835462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.835494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.835679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.835705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.835828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.835854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.835999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.836041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.836204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.836231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.836374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.836399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.836568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.836611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.836758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.836783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.836908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.836934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.837050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.837078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.837225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.837249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.837425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.837450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.837640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.837684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.837817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.837842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.837989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.838013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.838193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.838243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.838425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.838454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.838572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.838596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.838752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.838776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.838893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.838918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.839088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.839113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.839283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.839307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.839440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.839477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.839623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.839663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.839796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.839823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.839941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.839967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.840134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.840177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.840374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.840425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.840547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.840573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.840712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.840751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.840906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.840938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.841129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.841178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.841324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.841366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.841516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.841543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.841728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.148 [2024-07-22 12:28:10.841753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.148 qpair failed and we were unable to recover it.
00:33:03.148 [2024-07-22 12:28:10.841915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.149 [2024-07-22 12:28:10.841952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.149 qpair failed and we were unable to recover it.
00:33:03.149 [2024-07-22 12:28:10.842167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.149 [2024-07-22 12:28:10.842222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.149 qpair failed and we were unable to recover it.
00:33:03.149 [2024-07-22 12:28:10.842374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.149 [2024-07-22 12:28:10.842423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.149 qpair failed and we were unable to recover it.
00:33:03.149 [2024-07-22 12:28:10.842588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.149 [2024-07-22 12:28:10.842626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.149 qpair failed and we were unable to recover it.
00:33:03.149 [2024-07-22 12:28:10.842793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.842819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.842970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.842998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.843204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.843242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.843387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.843415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.843540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.843568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.843718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.843744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.843869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.843895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.844093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.844122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.844308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.844336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.844492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.844520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 
00:33:03.149 [2024-07-22 12:28:10.844693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.844719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.844839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.844865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.845008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.845051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.845259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.845286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.845424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.845450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.845597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.845632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.845768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.845794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.845965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.845994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.846161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.846186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.846361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.846390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 
00:33:03.149 [2024-07-22 12:28:10.846525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.846550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.846669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.846694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.846831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.846856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.847014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.847042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.847215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.847259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.847483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.847532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.847690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.847729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.847853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.847880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.848082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.848110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.848264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.848306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 
00:33:03.149 [2024-07-22 12:28:10.848430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.848455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.848592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.848623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.848762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.848806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.848939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.848983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.849147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.849191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.849338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.849363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.149 [2024-07-22 12:28:10.849517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.149 [2024-07-22 12:28:10.849542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.149 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.849683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.849735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.849883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.849926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.850067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.850092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 
00:33:03.150 [2024-07-22 12:28:10.850262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.850313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.850459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.850486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.850608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.850643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.850802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.850846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.850970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.850998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.851139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.851180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.851322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.851347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.851497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.851522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.851690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.851728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.851854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.851880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 
00:33:03.150 [2024-07-22 12:28:10.852052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.852078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.852207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.852234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.852351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.852378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.852527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.852551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.852712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.852741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.852884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.852909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.853057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.853085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.853250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.853278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.853399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.853427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.853582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.853610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 
00:33:03.150 [2024-07-22 12:28:10.853756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.853783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.853943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.853972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.854099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.854128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.854352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.854381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.854526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.854551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.854698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.854724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.854864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.854892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.855022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.855051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.855172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.855200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.855352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.855380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 
00:33:03.150 [2024-07-22 12:28:10.855539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.855567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.855718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.855744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.855863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.855889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.856005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.856030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.856200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.856228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.856405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.856433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.856641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.150 [2024-07-22 12:28:10.856683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.150 qpair failed and we were unable to recover it. 00:33:03.150 [2024-07-22 12:28:10.856802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.856831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.856980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.857005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.857162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.857190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 
00:33:03.151 [2024-07-22 12:28:10.857340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.857368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.857526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.857554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.857716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.857742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.857868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.857910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.858041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.858066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.858210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.858252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.858412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.858441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.858578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.858604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.858728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.858755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.858878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.858903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 
00:33:03.151 [2024-07-22 12:28:10.859117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.859157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.859335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.859364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.859496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.859524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.859656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.859682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.859803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.859829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.859999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.860041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.860183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.860226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.860380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.860407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.860553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.860581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.860717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.860743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 
00:33:03.151 [2024-07-22 12:28:10.860864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.860906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.861059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.861092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.861281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.861322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.861464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.861490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.861623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.861662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.861789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.861817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.861946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.861988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.862174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.862203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.862396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.862425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.862585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.862622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 
00:33:03.151 [2024-07-22 12:28:10.862769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.862794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.151 [2024-07-22 12:28:10.862959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.151 [2024-07-22 12:28:10.862988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.151 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.863180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.863208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.863359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.863408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.863543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.863569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.863724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.863750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.863869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.863911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.864080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.864111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.864227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.864268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.864444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.864485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 
00:33:03.152 [2024-07-22 12:28:10.864706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.864733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.864856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.864882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.864999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.865026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.865198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.865223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.865366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.865394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.865546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.865574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.865719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.865746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.865859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.865886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.866046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.866074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.866265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.866291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 
00:33:03.152 [2024-07-22 12:28:10.866410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.866436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.866562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.866589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.866737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.866763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.866880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.866922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.867113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.867138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.867315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.867341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.867505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.867533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.867711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.867737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.867905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.867931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.868118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.868146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 
00:33:03.152 [2024-07-22 12:28:10.868304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.868334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.868496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.868522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.868677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.868720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.868895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.868921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.869072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.869098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.869238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.869264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.869411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.869452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.869593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.869625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.869801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.869844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 00:33:03.152 [2024-07-22 12:28:10.870012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.152 [2024-07-22 12:28:10.870038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.152 qpair failed and we were unable to recover it. 
00:33:03.152 [2024-07-22 12:28:10.870208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.152 [2024-07-22 12:28:10.870234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.152 qpair failed and we were unable to recover it.
00:33:03.152 [identical connect()/qpair-failure triplet for tqpair=0x7f0544000b90 repeats continuously through 2024-07-22 12:28:10.895162]
00:33:03.156 [2024-07-22 12:28:10.895369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.156 [2024-07-22 12:28:10.895404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.156 qpair failed and we were unable to recover it.
00:33:03.157 [identical triplet for tqpair=0x7f0554000b90 repeats through 2024-07-22 12:28:10.901553]
00:33:03.157 [2024-07-22 12:28:10.901766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.157 [2024-07-22 12:28:10.901798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.157 qpair failed and we were unable to recover it.
00:33:03.158 [identical triplet for tqpair=0x7f0544000b90 repeats through 2024-07-22 12:28:10.908646]
00:33:03.158 [2024-07-22 12:28:10.908788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.908814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.908962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.908987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.909167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.909192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.909305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.909330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.909476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.909504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.909654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.909683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.909855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.909881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.910072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.910101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.910270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.910297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.910456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.910484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 
00:33:03.158 [2024-07-22 12:28:10.910625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.910670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.910784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.910809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.910929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.910954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.911139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.911167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.911293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.911322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.911466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.911491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.911638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.911664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.911850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.911878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.912021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.912046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.912186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.912212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 
00:33:03.158 [2024-07-22 12:28:10.912361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.912404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.912579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.912606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.912767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.912793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.912944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.912985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.913158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.913196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.913344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.913371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.913545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.913574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.913754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.913781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.913972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.914001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.914130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.914161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 
00:33:03.158 [2024-07-22 12:28:10.914315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.914340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.158 qpair failed and we were unable to recover it. 00:33:03.158 [2024-07-22 12:28:10.914485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.158 [2024-07-22 12:28:10.914512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.914708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.914738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.914878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.914903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.915049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.915075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.915257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.915286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.915452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.915477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.915670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.915699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.915868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.915897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.916060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.916086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 
00:33:03.159 [2024-07-22 12:28:10.916228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.916270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.916432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.916462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.916598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.916629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.916750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.916776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.916936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.916963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.917134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.917159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.917319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.917347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.917507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.917537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.917716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.917742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.917888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.917913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 
00:33:03.159 [2024-07-22 12:28:10.918058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.918083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.918242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.918267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.918412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.918437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.918548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.918574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.918736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.918762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.918909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.918936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.919119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.919144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.919311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.919336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.919463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.919489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.919646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.919689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 
00:33:03.159 [2024-07-22 12:28:10.919807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.919833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.919978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.920003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.920145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.920170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.920316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.920342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.920463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.920494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.920672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.920699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.920846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.920871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.921059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.921087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.921264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.921289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.921433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.921458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 
00:33:03.159 [2024-07-22 12:28:10.921576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.921602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.921732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.921757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.159 qpair failed and we were unable to recover it. 00:33:03.159 [2024-07-22 12:28:10.921877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.159 [2024-07-22 12:28:10.921902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.922043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.922068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.922229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.922258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.922409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.922436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.922596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.922628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.922788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.922814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.922942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.922968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.923110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.923136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 
00:33:03.160 [2024-07-22 12:28:10.923286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.923312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.923421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.923447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.923578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.923605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.923738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.923764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.923889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.923914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.924030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.924057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.924203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.924230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.924342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.924384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.924514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.924543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.924716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.924742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 
00:33:03.160 [2024-07-22 12:28:10.924868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.924894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.925062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.925091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.925276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.925304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.925462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.925488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.925606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.925656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.925791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.925820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.926026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.926052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.926177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.926202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.926424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.926466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.926608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.926639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 
00:33:03.160 [2024-07-22 12:28:10.926755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.926782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.926913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.926941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.927169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.927194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.927318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.927343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.927462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.927493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.927700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.927726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.160 qpair failed and we were unable to recover it. 00:33:03.160 [2024-07-22 12:28:10.927872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.160 [2024-07-22 12:28:10.927900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.928024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.928052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.928215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.928241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.928357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.928383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 
00:33:03.161 [2024-07-22 12:28:10.928496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.928521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.928671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.928697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.928812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.928857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.929040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.929068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.929204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.929230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.929372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.929414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.929568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.929597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.929737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.929763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.929887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.929913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.930054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.930083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 
00:33:03.161 [2024-07-22 12:28:10.930242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.930268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.930414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.930457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.930626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.930652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.930779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.930804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.930928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.930954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.931122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.931150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.931295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.931322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.931438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.931463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.931605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.931657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 00:33:03.161 [2024-07-22 12:28:10.931776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.161 [2024-07-22 12:28:10.931802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:03.161 qpair failed and we were unable to recover it. 
00:33:03.161 [2024-07-22 12:28:10.932030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.161 [2024-07-22 12:28:10.932058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:03.161 qpair failed and we were unable to recover it.
[The three-line connect()/qpair-failure sequence above repeats roughly 200 more times between 12:28:10.932 and 12:28:10.971, identical except for the timestamps and the tqpair handle, which alternates between 0x7f0544000b90 and 0x7f0554000b90; every occurrence reports errno = 111 against addr=10.0.0.2, port=4420.]
00:33:03.167 [2024-07-22 12:28:10.971060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.971088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.971305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.971354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.971499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.971524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.971689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.971715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.971887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.971915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.972085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.972110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.972229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.972254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.972394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.972422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.972578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.972603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.972733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.972774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 
00:33:03.167 [2024-07-22 12:28:10.972905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.972933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.973099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.973125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.973274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.973299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.973449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.973474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.973625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.973651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.973793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.973821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.973963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.973989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.974110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.974137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.974288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.974313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.974434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.974460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 
00:33:03.167 [2024-07-22 12:28:10.974603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.974635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.974756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.974783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.974957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.974999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.975162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.975187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.975375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.975403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.975556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.975584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.975828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.975854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.976028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.976056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.976246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.976274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.976434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.976462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 
00:33:03.167 [2024-07-22 12:28:10.976627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.976669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.976814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.976844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.977014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.977039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.977206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.167 [2024-07-22 12:28:10.977234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.167 qpair failed and we were unable to recover it. 00:33:03.167 [2024-07-22 12:28:10.977366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.977393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.977589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.977618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.977783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.977810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.977963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.977991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.978153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.978177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.978301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.978325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 
00:33:03.168 [2024-07-22 12:28:10.978441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.978466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.978579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.978603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.978728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.978753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.978872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.978896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.979036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.979060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.979214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.979239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.979390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.979414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.979560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.979584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.979710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.979736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.979903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.979927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 
00:33:03.168 [2024-07-22 12:28:10.980043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.980067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.980184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.980208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.980327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.980352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.980521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.980546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.980691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.980716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.980828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.980852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.981023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.981048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.981161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.981185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.981308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.981333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.981506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.981531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 
00:33:03.168 [2024-07-22 12:28:10.981689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.981716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.981853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.981880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.982028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.982053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.982204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.982229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.982399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.982427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.982611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.982662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.982791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.982817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.983011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.983038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.983235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.168 [2024-07-22 12:28:10.983259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.168 qpair failed and we were unable to recover it. 00:33:03.168 [2024-07-22 12:28:10.983421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.983451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 
00:33:03.169 [2024-07-22 12:28:10.983635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.983661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.983805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.983833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.983953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.983979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.984146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.984187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.984319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.984344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.984486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.984510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.984695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.984723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.984887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.984912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.985071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.985098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.985232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.985258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 
00:33:03.169 [2024-07-22 12:28:10.985426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.985450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.985606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.985639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.985808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.985834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.985974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.985999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.986138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.986163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.986282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.986307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.986482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.986508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.986679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.986723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.986865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.986890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.987061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.987087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 
00:33:03.169 [2024-07-22 12:28:10.987270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.987298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.987464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.987488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.987610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.987639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.987779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.987804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.988005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.988029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.988173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.988196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.988389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.988416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.988549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.988577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.988732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.988758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.988899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.988923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 
00:33:03.169 [2024-07-22 12:28:10.989078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.989103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.989274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.989298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.169 [2024-07-22 12:28:10.989419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.169 [2024-07-22 12:28:10.989460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.169 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.989624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.989667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.989787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.989810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.989950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.989991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.990150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.990178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.990352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.990377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.990536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.990565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.990706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.990734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 
00:33:03.170 [2024-07-22 12:28:10.990878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.990902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.991049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.991098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.991300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.991325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.991480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.991504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.991625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.991668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.991817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.991845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.992004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.992029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.992184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.992210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.992329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.992354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.992496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.992521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 
00:33:03.170 [2024-07-22 12:28:10.992715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.992742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.992904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.992931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.993093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.993118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.993281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.993309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.993460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.993487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.993662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.993688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.993875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.993903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.994041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.994068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.994200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.994224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 00:33:03.170 [2024-07-22 12:28:10.994371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.170 [2024-07-22 12:28:10.994412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.170 qpair failed and we were unable to recover it. 
00:33:03.170 [2024-07-22 12:28:10.994568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.170 [2024-07-22 12:28:10.994595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.170 qpair failed and we were unable to recover it.
[... same three-line error repeated continuously for tqpair=0x7f0554000b90 (connect() failed, errno = 111 / sock connection error with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.), timestamps 2024-07-22 12:28:10.994742 through 12:28:11.032363 ...]
00:33:03.176 [2024-07-22 12:28:11.032389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.176 [2024-07-22 12:28:11.032414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.176 qpair failed and we were unable to recover it.
00:33:03.176 [2024-07-22 12:28:11.032561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.032589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.032761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.032786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.032925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.032949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.033069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.033094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.033227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.033256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.033424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.033450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.033589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.033619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.033763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.033788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.033934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.033959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.034105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.034130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 
00:33:03.176 [2024-07-22 12:28:11.034247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.034274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.034398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.034427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.034577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.034605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.034776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.034802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.176 [2024-07-22 12:28:11.034916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.176 [2024-07-22 12:28:11.034957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.176 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.035121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.035149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.035346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.035374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.035542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.035568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.035685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.035710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.035853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.035879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 
00:33:03.490 [2024-07-22 12:28:11.036002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.036026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.036199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.036224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.036364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.036393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.036553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.036580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.036719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.036747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.036883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.036909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.037074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.037098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.037241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.037265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.037407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.037432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.037610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.037661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 
00:33:03.490 [2024-07-22 12:28:11.037806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.037830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.037948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.037973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.038145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.038170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.038293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.038318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.038491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.038516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.038635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.038661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.038783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.038807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.038923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.038949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.039118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.039144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.039285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.039310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 
00:33:03.490 [2024-07-22 12:28:11.039434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.039461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.039604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.039639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.039764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.039790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.039939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.039964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.040135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.040160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.040364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.040389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.040533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.040558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.040716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.040742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.040872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.040896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.041040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.041065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 
00:33:03.490 [2024-07-22 12:28:11.041206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.041231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.041349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.041379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.041555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.041583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.041727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.041752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.041868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.041893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.042094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.042122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.042289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.042316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.042433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.042459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.042603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.042637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.042763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.042788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 
00:33:03.490 [2024-07-22 12:28:11.042989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.490 [2024-07-22 12:28:11.043016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.490 qpair failed and we were unable to recover it. 00:33:03.490 [2024-07-22 12:28:11.043177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.043201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.043309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.043335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.043497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.043524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.043683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.043711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.043879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.043905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.044046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.044073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.044226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.044267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.044418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.044446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.044584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.044609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 
00:33:03.491 [2024-07-22 12:28:11.044758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.044798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.044933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.044962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.045108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.045136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.045303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.045328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.045492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.045520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.045699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.045724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.045927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.045979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.046155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.046178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.046327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.046353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.046539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.046567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 
00:33:03.491 [2024-07-22 12:28:11.046729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.046757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.046917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.046942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.047128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.047156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.047317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.047345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.047501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.047528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.047749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.047775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.047929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.047957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.048141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.048168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.048297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.048339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.048508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.048533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 
00:33:03.491 [2024-07-22 12:28:11.048662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.048689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.048833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.048863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.049065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.049116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.049278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.049302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.049425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.049450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.049603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.049635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.049812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.049837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.049960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.049985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.050135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.050178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.050338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.050365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 
00:33:03.491 [2024-07-22 12:28:11.050505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.050530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.050655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.050681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.050848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.050890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.051017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.051045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.051290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.051318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.051486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.051511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.051703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.051731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.051916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.051944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.052138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.052187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.052328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.052354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 
00:33:03.491 [2024-07-22 12:28:11.052535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.052563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.052731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.052757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.052900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.491 [2024-07-22 12:28:11.052925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.491 qpair failed and we were unable to recover it. 00:33:03.491 [2024-07-22 12:28:11.053097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.053123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.053239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.053264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.053443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.053468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.053618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.053644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.053758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.053784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.053904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.053930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.054054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.054079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 
00:33:03.492 [2024-07-22 12:28:11.054234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.054262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.054490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.054515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.054668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.054694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.054820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.054861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.055055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.055104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.055263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.055288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.055508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.055534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.055707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.055736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.055874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.055902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 00:33:03.492 [2024-07-22 12:28:11.056041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.492 [2024-07-22 12:28:11.056066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.492 qpair failed and we were unable to recover it. 
00:33:03.492 [2024-07-22 12:28:11.056185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.492 [2024-07-22 12:28:11.056210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.492 qpair failed and we were unable to recover it.
00:33:03.496 [the same three-message sequence repeats for every subsequent connect attempt on tqpair=0x7f0554000b90 (addr=10.0.0.2, port=4420) from 12:28:11.056 through 12:28:11.095, each failing with errno = 111]
00:33:03.496 [2024-07-22 12:28:11.095170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.095195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.095315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.095351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.095503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.095545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.095700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.095727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.095849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.095874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.096028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.096053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.096202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.096227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.096361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.096386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.096574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.096602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.096771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.096797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 
00:33:03.496 [2024-07-22 12:28:11.096911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.096937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.097137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.097165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.097334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.097359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.097526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.097552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.097727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.097757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.097917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.097942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.098109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.098134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.098336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.098361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.098519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.098552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.098688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.098717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 
00:33:03.496 [2024-07-22 12:28:11.098879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.098917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.099103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.099129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.099287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.099315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.099470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.099498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.496 qpair failed and we were unable to recover it. 00:33:03.496 [2024-07-22 12:28:11.099630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.496 [2024-07-22 12:28:11.099660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.099824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.099849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.100007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.100036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.100218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.100245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.100404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.100432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.100598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.100633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 
00:33:03.497 [2024-07-22 12:28:11.100824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.100852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.101023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.101051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.101215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.101244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.101390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.101416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.101558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.101583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.101736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.101780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.101943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.101971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.102102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.102128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.102271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.102298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.102420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.102446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 
00:33:03.497 [2024-07-22 12:28:11.102619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.102661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.102827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.102853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.102971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.103012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.103174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.103199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.103320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.103361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.103582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.103636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.103798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.103823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.103965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.103995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.104113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.104140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.104286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.104312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 
00:33:03.497 [2024-07-22 12:28:11.104451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.104476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.104626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.104652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.104765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.104792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.104934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.104960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.105080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.105105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.105226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.105251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.105366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.105391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.105510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.105535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.105679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.105727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.105894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.105921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 
00:33:03.497 [2024-07-22 12:28:11.106085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.106111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.106232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.106257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.106372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.106397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.106543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.106571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.106709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.106737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.106905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.106930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.107116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.107143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.107299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.107326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.107494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.107520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.107691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.107717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 
00:33:03.497 [2024-07-22 12:28:11.107906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.107934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.108108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.108132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.108263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.108287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.108436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.108471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.108621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.108648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.108800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.108836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.108980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.109008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.109162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.109187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.109327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.497 [2024-07-22 12:28:11.109352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.497 qpair failed and we were unable to recover it. 00:33:03.497 [2024-07-22 12:28:11.109499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.109524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 
00:33:03.498 [2024-07-22 12:28:11.109675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.109701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.109820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.109846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.109959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.109983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.110102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.110127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.110324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.110352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.110517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.110541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.110670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.110696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.110820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.110845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.111016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.111045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.111232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.111257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 
00:33:03.498 [2024-07-22 12:28:11.111417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.111445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.111603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.111645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.111806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.111831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.112001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.112025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.112138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.112164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.112284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.112309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.112464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.112491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.112684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.112710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.112832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.112862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.113005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.113029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 
00:33:03.498 [2024-07-22 12:28:11.113162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.113205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.113344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.113369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.113525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.113551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.113747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.113773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.113912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.113938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.114084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.114108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.114230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.114255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.114395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.114420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.114538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.114562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.114682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.114708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 
00:33:03.498 [2024-07-22 12:28:11.114851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.114876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.115055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.115084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.115250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.115278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.115416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.115441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.115587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.115611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.115753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.115778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.115917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.115945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.116079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.116104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.116230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.116254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.116427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.116452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 
00:33:03.498 [2024-07-22 12:28:11.116594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.116633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.116795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.116820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.116967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.116993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.117162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.117190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.117351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.117377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.117596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.117633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.117801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.117825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.117977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.118004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.118182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.118225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 00:33:03.498 [2024-07-22 12:28:11.118395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.498 [2024-07-22 12:28:11.118419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.498 qpair failed and we were unable to recover it. 
00:33:03.498 [2024-07-22 12:28:11.118534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.498 [2024-07-22 12:28:11.118573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.498 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats through 2024-07-22 12:28:11.123575 ...]
00:33:03.499 [2024-07-22 12:28:11.123731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.499 [2024-07-22 12:28:11.123770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.499 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0xa58450 with addr=10.0.0.2, port=4420, ending with ...]
00:33:03.502 [2024-07-22 12:28:11.156559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.502 [2024-07-22 12:28:11.156586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.502 qpair failed and we were unable to recover it.
00:33:03.502 [2024-07-22 12:28:11.156777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.502 [2024-07-22 12:28:11.156802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.502 qpair failed and we were unable to recover it. 00:33:03.502 [2024-07-22 12:28:11.156921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.502 [2024-07-22 12:28:11.156945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.502 qpair failed and we were unable to recover it. 00:33:03.502 [2024-07-22 12:28:11.157115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.502 [2024-07-22 12:28:11.157140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.502 qpair failed and we were unable to recover it. 00:33:03.502 [2024-07-22 12:28:11.157300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.502 [2024-07-22 12:28:11.157328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.502 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.157510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.157538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.157697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.157722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.157864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.157889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.158036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.158078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.158208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.158236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.158408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.158432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 
00:33:03.503 [2024-07-22 12:28:11.158620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.158649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.158805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.158837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.159012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.159039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.159203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.159228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.159346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.159372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.159543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.159570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.159770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.159795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.159955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.159978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.160146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.160175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.160299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.160326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 
00:33:03.503 [2024-07-22 12:28:11.160487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.160514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.160675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.160701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.160871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.160913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.161090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.161115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.161257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.161282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.161425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.161450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.161611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.161643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.161789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.161816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.161949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.161976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.162120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.162150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 
00:33:03.503 [2024-07-22 12:28:11.162274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.162316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.162501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.162529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.162663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.162692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.162864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.162889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.163087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.163116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.163303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.163331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.163461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.163489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.163639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.163675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.163797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.163823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.163969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.164006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 
00:33:03.503 [2024-07-22 12:28:11.164147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.164174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.164343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.164369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.164569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.164598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.164796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.164825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.164952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.164980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.165162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.165187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.165357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.165411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.165570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.165599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.165767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.165797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.503 [2024-07-22 12:28:11.165944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.165969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 
00:33:03.503 [2024-07-22 12:28:11.166138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.503 [2024-07-22 12:28:11.166178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.503 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.166343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.166369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.166515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.166540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.166704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.166730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.166897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.166931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.167096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.167124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.167280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.167324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.167497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.167523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.167687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.167716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.167871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.167900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 
00:33:03.504 [2024-07-22 12:28:11.168066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.168095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.168260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.168286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.168439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.168465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.168608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.168639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.168761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.168786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.168937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.168963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.169113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.169155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.169313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.169341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.169498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.169527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.169691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.169718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 
00:33:03.504 [2024-07-22 12:28:11.169867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.169892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.170036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.170062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.170186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.170212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.170355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.170381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.170495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.170539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.170709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.170738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.170871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.170899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.171038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.171064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.171209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.171236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.171403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.171430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 
00:33:03.504 [2024-07-22 12:28:11.171564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.171592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.171756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.171782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.171959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.172000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.172154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.172181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.172343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.172380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.172516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.172542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.172689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.172715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.172835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.172861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.173057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.173082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.173223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.173248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 
00:33:03.504 [2024-07-22 12:28:11.173400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.173426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.173590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.173636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.173819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.173848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.173995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.174021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.174170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.174212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.174372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.174400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.174568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.174597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.174744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.174774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.174890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.174921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.175064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.175092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 
00:33:03.504 [2024-07-22 12:28:11.175246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.175275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.175447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.175472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.175655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.175683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.175854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.175880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.176062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.176088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.504 qpair failed and we were unable to recover it. 00:33:03.504 [2024-07-22 12:28:11.176227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.504 [2024-07-22 12:28:11.176252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.176392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.176433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.176567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.176596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.176764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.176790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.176912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.176937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 
00:33:03.505 [2024-07-22 12:28:11.177077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.177103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.177249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.177277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.177434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.177462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.177609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.177641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.177756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.177781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.177894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.177926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.178075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.178103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.178294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.178320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.178445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.178471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.178623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.178648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 
00:33:03.505 [2024-07-22 12:28:11.178797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.178840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.179037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.179063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.179207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.179232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.179374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.179399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.179648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.179674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.179801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.179828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.179964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.180006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.180188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.180220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.180451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.180507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 00:33:03.505 [2024-07-22 12:28:11.180698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.180724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it. 
00:33:03.505 [2024-07-22 12:28:11.180886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.505 [2024-07-22 12:28:11.180918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.505 qpair failed and we were unable to recover it.
[the same three messages (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeat for every reconnect attempt from 12:28:11.180886 through 12:28:11.219995, differing only in timestamps]
00:33:03.509 [2024-07-22 12:28:11.220152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.220180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.220374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.220400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.220595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.220632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.220799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.220824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.220970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.221011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.221178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.221204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.221313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.221338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.221457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.221484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.221679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.221708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.221872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.221897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 
00:33:03.509 [2024-07-22 12:28:11.222011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.222037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.222199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.222224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.222365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.222390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.222605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.222637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.222760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.222785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.222907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.222932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.223115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.223141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.223283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.223309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.223454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.223479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.223675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.223705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 
00:33:03.509 [2024-07-22 12:28:11.223860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.223890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.224050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.224076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.224222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.224247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.509 [2024-07-22 12:28:11.224365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.509 [2024-07-22 12:28:11.224390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.509 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.224533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.224559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.224741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.224768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.224883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.224926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.225080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.225109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.225334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.225392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.225543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.225572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 
00:33:03.510 [2024-07-22 12:28:11.225724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.225750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.225941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.225969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.226225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.226274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.226430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.226458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.226620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.226663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.226788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.226814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.226974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.227003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.227169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.227195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.227317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.227342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.227488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.227513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 
00:33:03.510 [2024-07-22 12:28:11.227632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.227658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.227827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.227853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.228020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.228048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.228246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.228271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.228454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.228482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.228622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.228647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.228835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.228864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.229015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.229043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.229165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.229193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.229324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.229350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 
00:33:03.510 [2024-07-22 12:28:11.229546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.229574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.229777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.229802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.229949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.229992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.230127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.230153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.230323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.230366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.230522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.230550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.230706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.230734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.230902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.230928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.231045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.231082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.231221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.231246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 
00:33:03.510 [2024-07-22 12:28:11.231385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.231414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.231578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.231603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.231764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.231792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.231940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.231968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.232127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.232152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.232296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.232322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.232513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.232541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.232696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.232726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.232919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.232974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.233117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.233145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 
00:33:03.510 [2024-07-22 12:28:11.233334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.233367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.233528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.233557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.233726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.233752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.233924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.233950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.234115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.234146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.234333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.234361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.234485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.234512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.510 [2024-07-22 12:28:11.234703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.510 [2024-07-22 12:28:11.234728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.510 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.234850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.234877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.234994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.235018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 
00:33:03.511 [2024-07-22 12:28:11.235166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.235190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.235345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.235371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.235513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.235555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.235715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.235744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.235920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.235946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.236094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.236119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.236261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.236286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.236404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.236430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.236559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.236586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.236763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.236790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 
00:33:03.511 [2024-07-22 12:28:11.236909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.236952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.237116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.237141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.237260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.237285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.237453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.237478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.237604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.237639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.237791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.237819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.237949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.237977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.238116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.238141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.238296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.238322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.238464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.238492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 
00:33:03.511 [2024-07-22 12:28:11.238626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.238655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.238798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.238824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.238973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.239015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.239150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.239177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.239303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.239331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.239471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.239495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.239610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.239646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.239817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.239847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.239975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.240002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.240147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.240172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 
00:33:03.511 [2024-07-22 12:28:11.240282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.240317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.240494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.240521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.240706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.240732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.240852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.240877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.240991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.241016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.241157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.241183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.241326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.241368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.241563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.241590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.241739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.241765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.241910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.241935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 
00:33:03.511 [2024-07-22 12:28:11.242132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.242157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.242275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.242301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.242472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.242514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.242666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.242705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.242862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.242890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.243070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.243096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.243247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.243273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.243384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.243409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.243552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.243583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 00:33:03.511 [2024-07-22 12:28:11.243736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.511 [2024-07-22 12:28:11.243763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.511 qpair failed and we were unable to recover it. 
00:33:03.511 [2024-07-22 12:28:11.243944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.511 [2024-07-22 12:28:11.243986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.511 qpair failed and we were unable to recover it.
00:33:03.514 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it." triplet repeated continuously from 12:28:11.243944 through 12:28:11.282937 for tqpair=0xa58450 (addr=10.0.0.2, port=4420) ...]
00:33:03.514 [2024-07-22 12:28:11.283203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.514 [2024-07-22 12:28:11.283252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.514 qpair failed and we were unable to recover it. 00:33:03.514 [2024-07-22 12:28:11.283406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.514 [2024-07-22 12:28:11.283430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.514 qpair failed and we were unable to recover it. 00:33:03.514 [2024-07-22 12:28:11.283625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.514 [2024-07-22 12:28:11.283659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.514 qpair failed and we were unable to recover it. 00:33:03.514 [2024-07-22 12:28:11.283823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.514 [2024-07-22 12:28:11.283852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.514 qpair failed and we were unable to recover it. 00:33:03.514 [2024-07-22 12:28:11.284005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.514 [2024-07-22 12:28:11.284033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.514 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.284198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.284223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.284409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.284438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.284585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.284611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.284766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.284791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.284954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.284980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 
00:33:03.515 [2024-07-22 12:28:11.285143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.285171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.285329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.285357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.285538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.285566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.285769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.285796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.285957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.285986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.286137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.286166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.286342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.286393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.286541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.286566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.286710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.286737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.286875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.286915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 
00:33:03.515 [2024-07-22 12:28:11.287077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.287105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.287270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.287296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.287441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.287484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.287651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.287677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.287850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.287875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.288022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.288048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.288244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.288272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.288407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.288436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.288598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.288632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.288768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.288795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 
00:33:03.515 [2024-07-22 12:28:11.288955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.288980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.289124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.289165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.289365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.289426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.289610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.289662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.289799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.289828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.290032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.290058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.290205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.290230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.290343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.290369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.290491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.290515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.290653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.290679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 
00:33:03.515 [2024-07-22 12:28:11.290831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.290874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.291043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.291068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.291261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.291289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.291460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.291490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.291667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.291709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.291881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.291906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.292078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.292103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.292256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.292298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.292469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.292497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.292639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.292664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 
00:33:03.515 [2024-07-22 12:28:11.292810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.292850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.292974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.293003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.293138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.293167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.293358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.293383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.293546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.293575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.293734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.293764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.293990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.294045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.294238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.294264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.294426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.294454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.294611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.294645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 
00:33:03.515 [2024-07-22 12:28:11.294803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.294832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.294981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.295013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.295161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.295186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.295328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.295354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.295502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.515 [2024-07-22 12:28:11.295531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.515 qpair failed and we were unable to recover it. 00:33:03.515 [2024-07-22 12:28:11.295675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.295701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.295872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.295913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.296108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.296134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.296297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.296338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.296506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.296532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 
00:33:03.516 [2024-07-22 12:28:11.296676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.296702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.296870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.296898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.297102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.297127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.297300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.297326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.297512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.297540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.297700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.297728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.297890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.297919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.298081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.298107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.298294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.298323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.298507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.298535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 
00:33:03.516 [2024-07-22 12:28:11.298736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.298762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.298904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.298931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.299080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.299106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.299300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.299328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.299465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.299494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.299634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.299661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.299779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.299804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.299973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.300001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.300131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.300159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.300301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.300326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 
00:33:03.516 [2024-07-22 12:28:11.300467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.300493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.300622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.300648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.300821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.300863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.301005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.301031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.301155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.301181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.301300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.301325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.301507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.301536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.301740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.301766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.301958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.301986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.302105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.302133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 
00:33:03.516 [2024-07-22 12:28:11.302316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.302344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.302532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.302558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.302719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.302748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.302901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.302930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.303088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.303116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.303311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.303337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.303499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.303528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.303683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.303712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.303953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.304003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.304161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.304187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 
00:33:03.516 [2024-07-22 12:28:11.304330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.304373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.304510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.304543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.304716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.304742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.304891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.304917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.305062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.305088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.305233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.305274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.305426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.305455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.305636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.305678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.305845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.305871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.306056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.306081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 
00:33:03.516 [2024-07-22 12:28:11.306277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.306343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.306509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.306535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.306686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.306712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.306871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.306914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.307073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.307101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.307239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.307265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.307413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.307439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.307582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.307609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.307771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.307799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 00:33:03.516 [2024-07-22 12:28:11.307991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.516 [2024-07-22 12:28:11.308016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.516 qpair failed and we were unable to recover it. 
00:33:03.516 [2024-07-22 12:28:11.308191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.516 [2024-07-22 12:28:11.308220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.516 qpair failed and we were unable to recover it.
00:33:03.516 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats about 200 more times, with timestamps advancing from 12:28:11.308414 through 12:28:11.347220 ...]
00:33:03.519 [2024-07-22 12:28:11.347415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.519 [2024-07-22 12:28:11.347443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.519 qpair failed and we were unable to recover it.
00:33:03.519 [2024-07-22 12:28:11.347599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.519 [2024-07-22 12:28:11.347633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.519 qpair failed and we were unable to recover it. 00:33:03.519 [2024-07-22 12:28:11.347821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.519 [2024-07-22 12:28:11.347846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.519 qpair failed and we were unable to recover it. 00:33:03.519 [2024-07-22 12:28:11.348034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.519 [2024-07-22 12:28:11.348062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.519 qpair failed and we were unable to recover it. 00:33:03.519 [2024-07-22 12:28:11.348188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.519 [2024-07-22 12:28:11.348216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.519 qpair failed and we were unable to recover it. 00:33:03.519 [2024-07-22 12:28:11.348386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.519 [2024-07-22 12:28:11.348412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.519 qpair failed and we were unable to recover it. 00:33:03.519 [2024-07-22 12:28:11.348562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.519 [2024-07-22 12:28:11.348587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.519 qpair failed and we were unable to recover it. 00:33:03.519 [2024-07-22 12:28:11.348738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.348764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.348909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.348935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.349102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.349144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.349307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.349332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 
00:33:03.520 [2024-07-22 12:28:11.349490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.349517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.349656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.349685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.349844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.349872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.350017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.350042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.350184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.350226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.350384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.350413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.350583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.350629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.350804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.350829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.350982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.351006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.351216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.351242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 
00:33:03.520 [2024-07-22 12:28:11.351362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.351387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.351533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.351559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.351748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.351777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.351904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.351933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.352135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.352161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.352309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.352334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.352494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.352522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.352708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.352734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.352856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.352897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.353066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.353092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 
00:33:03.520 [2024-07-22 12:28:11.353248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.353273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.353465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.353492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.353608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.353655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.353821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.353848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.354002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.354030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.354202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.354227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.354374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.354400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.354563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.354592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.354788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.354814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.354934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.354960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 
00:33:03.520 [2024-07-22 12:28:11.355074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.355102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.355224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.355250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.355399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.355425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.355572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.355620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.355787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.355815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.355961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.355986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.356135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.356161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.356302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.356327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.356469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.356495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.356639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.356666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 
00:33:03.520 [2024-07-22 12:28:11.356788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.356832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.356997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.357022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.357137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.357162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.357308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.357332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.357497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.357522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.357644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.357670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.357815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.357844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.358033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.358060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.358249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.358277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.358399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.358427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 
00:33:03.520 [2024-07-22 12:28:11.358610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.358643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.358815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.358840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.358989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.359030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.359194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.359223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.359367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.359394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.359531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.359557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.359719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.359749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.359934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.359963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.360129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.360155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.360299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.360325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 
00:33:03.520 [2024-07-22 12:28:11.360514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.360542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.360697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.360723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.520 qpair failed and we were unable to recover it. 00:33:03.520 [2024-07-22 12:28:11.360867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.520 [2024-07-22 12:28:11.360893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.361031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.361057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.361212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.361241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.361429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.361455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.361603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.361657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.361801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.361827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.361970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.361995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.362154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.362182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 
00:33:03.521 [2024-07-22 12:28:11.362343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.362373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.362525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.362550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.362673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.362701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.362847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.362873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.363027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.363062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.363258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.363284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.363400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.363426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.363574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.363600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.363778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.363808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.363948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.363974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 
00:33:03.521 [2024-07-22 12:28:11.364162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.364191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.364371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.364400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.364561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.364587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.364740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.364767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.364878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.364904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.365028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.365054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.365214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.365251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.365437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.365463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.365642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.365672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.365808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.365837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 
00:33:03.521 [2024-07-22 12:28:11.366039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.366075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.366195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.366221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.366365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.366391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.366540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.366566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.366730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.366759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.366922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.366947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.367132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.367161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.367319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.367349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.367506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.367534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.367709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.367736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 
00:33:03.521 [2024-07-22 12:28:11.367862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.367888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.368034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.368059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.368206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.368235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.368376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.368402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.368542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.368568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.368678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.368704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.521 [2024-07-22 12:28:11.368849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.521 [2024-07-22 12:28:11.368875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.521 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.369050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.369077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.369246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.369275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.369401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.369430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 
00:33:03.807 [2024-07-22 12:28:11.369610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.369646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.369809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.369835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.369950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.369991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.370148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.370177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.370359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.370386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.370533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.370560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.370697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.370724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.370896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.370921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.371103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.371129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 00:33:03.807 [2024-07-22 12:28:11.371273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.807 [2024-07-22 12:28:11.371299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.807 qpair failed and we were unable to recover it. 
00:33:03.807 [2024-07-22 12:28:11.371438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.807 [2024-07-22 12:28:11.371463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.807 qpair failed and we were unable to recover it.
00:33:03.807 [... the same three-line failure (posix.c:1038 connect() errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats 103 more times between 12:28:11.371599 and 12:28:11.390871 ...]
00:33:03.809 [2024-07-22 12:28:11.391024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.809 [2024-07-22 12:28:11.391069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:03.809 qpair failed and we were unable to recover it.
00:33:03.811 [... the same three-line failure, now with tqpair=0x7f0554000b90 and the same addr=10.0.0.2, port=4420, repeats 105 more times between 12:28:11.391236 and 12:28:11.410985 ...]
00:33:03.811 [2024-07-22 12:28:11.411163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.411190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.411374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.411400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.411557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.411586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.411753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.411779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.411922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.411948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.412090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.412116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.412277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.412305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.412492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.412521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.412706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.412738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.412904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.412931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 
00:33:03.811 [2024-07-22 12:28:11.413041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.413083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.413236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.413265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.413428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.413456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.413622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.413648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.413777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.413804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.413922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.413948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.414117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.414146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.414332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.414358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.414543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.414571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.414750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.414778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 
00:33:03.811 [2024-07-22 12:28:11.414949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.414990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.415181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.415208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.415399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.415425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.415574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.415600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.415806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.415832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.415945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.415971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.416109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.811 [2024-07-22 12:28:11.416134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.811 qpair failed and we were unable to recover it. 00:33:03.811 [2024-07-22 12:28:11.416283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.416329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.416460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.416490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.416660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.416687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 
00:33:03.812 [2024-07-22 12:28:11.416858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.416884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.417082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.417111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.417333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.417386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.417573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.417603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.417769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.417799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.417966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.417997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.418141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.418192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.418366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.418393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.418569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.418605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.418821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.418847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 
00:33:03.812 [2024-07-22 12:28:11.418996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.419022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.419195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.419221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.419366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.419420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.419604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.419638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.419784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.419813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.419978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.420011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.420158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.420184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.420335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.420377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.420568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.420597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.420780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.420806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 
00:33:03.812 [2024-07-22 12:28:11.421001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.421030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.421193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.421223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.421423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.421485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.421673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.421700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.421901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.421931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.422086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.422115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.422301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.422366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.422502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.422528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.422721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.422750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.422887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.422916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 
00:33:03.812 [2024-07-22 12:28:11.423074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.423102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.423235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.423261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.423399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.423453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.423658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.423685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.423799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.423825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.423938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.423963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.424141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.424183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.424343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.424377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.424563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.424598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.424742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.424768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 
00:33:03.812 [2024-07-22 12:28:11.424927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.424967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.425153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.425182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.425343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.425369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.425486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.425520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.425666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.425694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.425882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.425912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.426154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.426204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.426378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.426404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.426551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.426577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.426755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.426797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 
00:33:03.812 [2024-07-22 12:28:11.427023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.427076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.427275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.427302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.427442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.427471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.427618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.812 [2024-07-22 12:28:11.427645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.812 qpair failed and we were unable to recover it. 00:33:03.812 [2024-07-22 12:28:11.427814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.427857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.428050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.428076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.428263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.428292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.428448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.428478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.428636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.428666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.428840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.428866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 
00:33:03.813 [2024-07-22 12:28:11.429021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.429047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.429246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.429272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.429409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.429434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.429583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.429627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.429800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.429829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.429957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.429987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.430142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.430171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.430339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.430364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.430480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.430506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.430654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.430681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 
00:33:03.813 [2024-07-22 12:28:11.430820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.430851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.431008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.431035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.431226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.431254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.431444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.431470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.431659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.431688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.431847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.431873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.431987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.432029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.432190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.432223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.432368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.432397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.432584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.432617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 
00:33:03.813 [2024-07-22 12:28:11.432818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.432844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.432979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.433008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.433258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.433310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.433507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.433534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.433675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.433705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.433893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.433922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.434121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.434147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.434258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.434284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.434453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.434496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.434637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.434667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 
00:33:03.813 [2024-07-22 12:28:11.434822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.434851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.434997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.435023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.435163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.435189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.435349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.435383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.435569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.435597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.435783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.435809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.435930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.435985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.436109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.436139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.436300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.436329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 00:33:03.813 [2024-07-22 12:28:11.436492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.436530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it. 
00:33:03.813 [2024-07-22 12:28:11.436719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.813 [2024-07-22 12:28:11.436748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:03.813 qpair failed and we were unable to recover it.
00:33:03.813 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 12:28:11.436930 through 12:28:11.477173, for tqpair=0x7f0554000b90 and, in later attempts, tqpair=0x7f054c000b90, all against addr=10.0.0.2, port=4420 ...]
00:33:03.817 [2024-07-22 12:28:11.477289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.477319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.477487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.477513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.477713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.477761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.477957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.478002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.478141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.478167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.478296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.478322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.478490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.478516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.478683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.478712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.478898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.478941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.479076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.479106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 
00:33:03.817 [2024-07-22 12:28:11.479302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.479328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.479496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.479521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.817 qpair failed and we were unable to recover it. 00:33:03.817 [2024-07-22 12:28:11.479697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-07-22 12:28:11.479742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.479867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.479894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.480057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.480101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.480232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.480258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.480404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.480432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.480552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.480578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.480729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.480782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.480918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.480962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 
00:33:03.818 [2024-07-22 12:28:11.481118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.481160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.481289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.481316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.481494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.481520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.481671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.481701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.481876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.481902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.482028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.482055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.482208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.482234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.482381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.482406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.482553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.482578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.482753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.482779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 
00:33:03.818 [2024-07-22 12:28:11.482946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.482989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.483155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.483201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.483343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.483368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.483515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.483541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.483678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.483729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.483872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.483898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.484045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.484073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.484217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.484243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.484411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.484436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.484582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.484607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 
00:33:03.818 [2024-07-22 12:28:11.484817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.484846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.485004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.485050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.485224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.485267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.485438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.485463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.485607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.485638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.485807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.485851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.486023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.486066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.486230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.486273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.486421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.486448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.486628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.486654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 
00:33:03.818 [2024-07-22 12:28:11.486830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.486879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.487065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.487094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.487309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.487360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.487531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.487557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.487728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.487772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.487934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.487977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.488168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.488213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.488331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.488359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.488478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.488505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.488705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.488750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 
00:33:03.818 [2024-07-22 12:28:11.488919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.488964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.489114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.489140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.489258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.489283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.489453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.489479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.489632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.489659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.489825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.489866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.490060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.490087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.490283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.490308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.818 [2024-07-22 12:28:11.490452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.818 [2024-07-22 12:28:11.490478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.818 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.490625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.490651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 
00:33:03.819 [2024-07-22 12:28:11.490791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.490834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.490961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.490988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.491142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.491167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.491325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.491350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.491497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.491527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.491689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.491734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.491897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.491925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.492115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.492142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.492281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.492306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.492454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.492480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 
00:33:03.819 [2024-07-22 12:28:11.492600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.492643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.492789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.492815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.493003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.493029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.493174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.493199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.493344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.493371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.493496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.493522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.493669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.493696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.493839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.493865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.493995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.494020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.494140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.494167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 
00:33:03.819 [2024-07-22 12:28:11.494289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.494316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.494463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.494488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.494618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.494644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.494781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.494825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.494991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.495034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.495159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.495186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.495343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.495368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.495516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.495541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.495719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.495763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.495930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.495973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 
00:33:03.819 [2024-07-22 12:28:11.496092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.496117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.496269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.496296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.496445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.496472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.496608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.496639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.496807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.496853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.497059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.497103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.497249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.497275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.497420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.497446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.497596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.497629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.497790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.497834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 
00:33:03.819 [2024-07-22 12:28:11.498025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.498070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.498238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.498280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.498417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.498442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.498626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.498651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.498793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.498841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-22 12:28:11.499038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.819 [2024-07-22 12:28:11.499082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.499253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.499281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.499468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.499493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.499664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.499692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.499857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.499887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 
00:33:03.820 [2024-07-22 12:28:11.500083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.500127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.500289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.500333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.500490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.500516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.500689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.500734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.500883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.500935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.501278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.501314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.501475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.501502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.501657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.501684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.501813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.501838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-22 12:28:11.501963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.501987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it. 
00:33:03.820 [2024-07-22 12:28:11.502107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.820 [2024-07-22 12:28:11.502132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:03.820 qpair failed and we were unable to recover it.
[... the three-line failure above recurs 156 times in total for tqpair=0x7f054c000b90 (12:28:11.502107 through 12:28:11.531550), every attempt against addr=10.0.0.2, port=4420 failing with errno = 111 ...]
00:33:03.823 [2024-07-22 12:28:11.531776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.823 [2024-07-22 12:28:11.531821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.823 qpair failed and we were unable to recover it.
[... the same failure recurs 54 times in total for tqpair=0xa58450 (12:28:11.531776 through 12:28:11.541745) ...]
00:33:03.824 [2024-07-22 12:28:11.541925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.541953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.542137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.542166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.542418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.542451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.542619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.542662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.542801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.542827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.543002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.543028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.543189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.543214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.543388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.543416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.543535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.543563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.543709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.543737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 
00:33:03.824 [2024-07-22 12:28:11.543880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.543906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.544065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.544093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.544258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.544302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.544462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.544490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.544655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.544682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.544850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.544875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.545106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.545170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.545315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.545342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.545541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.545570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.545725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.545752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 
00:33:03.824 [2024-07-22 12:28:11.545899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.545940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.546124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.546152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.546396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.546441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.546634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.546660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.546799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.546824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.546969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.547010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.547193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.547239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.547395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.547423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.547584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.547618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.547785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.547811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 
00:33:03.824 [2024-07-22 12:28:11.547962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.547988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.548131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.548160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.548309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.548337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.548471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.548499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.548664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.548691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.548812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.548839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.548987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.549013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.549191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.549219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.549349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.549374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.549496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.549523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 
00:33:03.824 [2024-07-22 12:28:11.549739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.549765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.549885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.549932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.550118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.550144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.550305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.550334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.550491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.550521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.550652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.550681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.550871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.550897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.551027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.551055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.551217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.551246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 00:33:03.824 [2024-07-22 12:28:11.551397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.551426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.824 qpair failed and we were unable to recover it. 
00:33:03.824 [2024-07-22 12:28:11.551589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.824 [2024-07-22 12:28:11.551620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.551734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.551776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.551933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.551962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.552088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.552117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.552277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.552302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.552428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.552454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.552592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.552624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.552799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.552825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.553012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.553038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.553228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.553257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 
00:33:03.825 [2024-07-22 12:28:11.553408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.553436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.553602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.553634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.553779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.553805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.553988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.554017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.554188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.554213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.554353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.554379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.554528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.554553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.554683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.554709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.554853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.554878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.555050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.555078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 
00:33:03.825 [2024-07-22 12:28:11.555267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.555297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.555466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.555494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.555662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.555689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.555797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.555822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.555944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.555970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.556139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.556164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.556343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.556368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.556545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.556570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.556702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.556729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.556875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.556918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 
00:33:03.825 [2024-07-22 12:28:11.557076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.557104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.557273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.557299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.557439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.557465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.557618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.557644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.557793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.557819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.557991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.558019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.558165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.558190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.558326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.558367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.558506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.558534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.558691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.558720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 
00:33:03.825 [2024-07-22 12:28:11.558856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.558882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.559026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.559051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.559161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.559186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.559330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.559355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.559467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.559493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.559641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.559667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.559835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.559864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.560024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.560052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.560226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.560252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.560440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.560468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 
00:33:03.825 [2024-07-22 12:28:11.560656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.560682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.560821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.560862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.561008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.561033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.561201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.561242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.561429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.561466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.561654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.561683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.561823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.561849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.825 qpair failed and we were unable to recover it. 00:33:03.825 [2024-07-22 12:28:11.561996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.825 [2024-07-22 12:28:11.562022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.562194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.562236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.562366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.562395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 
00:33:03.826 [2024-07-22 12:28:11.562553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.562581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.562779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.562806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.562982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.563015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.563186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.563214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.563357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.563383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.563528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.563570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.563771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.563799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.563952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.563980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.564146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.564173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.564288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.564315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 
00:33:03.826 [2024-07-22 12:28:11.564491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.564520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.564673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.564702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.564893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.564918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.565076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.565104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.565285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.565346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.565476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.565505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.565671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.565698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.565805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.565846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.566049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.566116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.566300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.566329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 
00:33:03.826 [2024-07-22 12:28:11.566530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.566555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.566753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.566781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.566952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.566977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.567089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.567115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.567261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.567287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.567402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.567428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.567539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.567564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.567712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.567738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.567876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.567906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 00:33:03.826 [2024-07-22 12:28:11.568064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.826 [2024-07-22 12:28:11.568093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.826 qpair failed and we were unable to recover it. 
00:33:03.830 [2024-07-22 12:28:11.605839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.605868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.606049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.606077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.606247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.606276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.606422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.606448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.606596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.606643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.606809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.606837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.606991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.607016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.607160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.607203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.607377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.607440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.607595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.607628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 
00:33:03.830 [2024-07-22 12:28:11.607800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.607826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.608017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.608045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.608171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.608199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.608350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.608375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.608515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.608542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.608715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.608744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.608902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.608930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.609113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.609141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.609301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.609326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.609494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.609522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 
00:33:03.830 [2024-07-22 12:28:11.609684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.609713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.609869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.609897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.610057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.610083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.610204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.610229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.610393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.610421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.610577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.610605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.610805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.610831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.610985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.611013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.611199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.611225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.611369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.611412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 
00:33:03.830 [2024-07-22 12:28:11.611588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.611618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.611763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.611789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.611907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.830 [2024-07-22 12:28:11.611933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.830 qpair failed and we were unable to recover it. 00:33:03.830 [2024-07-22 12:28:11.612078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.612103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.612245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.612270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.612436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.612461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.612678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.612707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.612890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.612918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.613109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.613134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.613270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.613298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 
00:33:03.831 [2024-07-22 12:28:11.613481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.613509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.613655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.613682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.613837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.613863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.614028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.614061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.614213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.614238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.614382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.614407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.614591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.614621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.614771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.614797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.614942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.614983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.615137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.615166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 
00:33:03.831 [2024-07-22 12:28:11.615305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.615330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.615478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.615504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.615709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.615735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.615879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.615920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.616083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.616109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.616300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.616328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.616524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.616550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.616693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.616719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.616843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.616870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.617039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.617081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 
00:33:03.831 [2024-07-22 12:28:11.617217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.617246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.617380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.617408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.617543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.617569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.617689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.617715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.617862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.617887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.618027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.618055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.618222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.618248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.618384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.618409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.618521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.618547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.618688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.618715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 
00:33:03.831 [2024-07-22 12:28:11.618917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.618942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.619133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.619162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.619324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.619349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.619473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.619500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.619721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.619748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.619918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.619947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.620112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.620138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.620307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.620332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.620509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.620534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.620680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.620706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 
00:33:03.831 [2024-07-22 12:28:11.620850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.620875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.621029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.621054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.621202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.621227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.621390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.621419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.621587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.621620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.621762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.621789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.621943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.621968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.622127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.831 [2024-07-22 12:28:11.622157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.831 qpair failed and we were unable to recover it. 00:33:03.831 [2024-07-22 12:28:11.622310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.622339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.622523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.622551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 
00:33:03.832 [2024-07-22 12:28:11.622718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.622744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.622887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.622929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.623135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.623195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.623378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.623406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.623550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.623575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.623732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.623776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.623937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.623967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.624141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.624167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.624315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.624341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.624488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.624514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 
00:33:03.832 [2024-07-22 12:28:11.624666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.624693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.624861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.624889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.625029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.625054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.625197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.625223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.625404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.625429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.625545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.625570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.625733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.625761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.625919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.625944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.626154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.626218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.626401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.626430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 
00:33:03.832 [2024-07-22 12:28:11.626599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.626629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.626775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.626805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.626926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.626952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.627093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.627118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.627259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.627284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.627405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.627431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.627573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.627598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.627721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.627747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.627868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.627894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.628005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.628032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 
00:33:03.832 [2024-07-22 12:28:11.628242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.628270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.628429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.628457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.628593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.628634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.628810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.628836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.629004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.629033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.629173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.629202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.629349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.629374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.629541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.629567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.629755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.629784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 00:33:03.832 [2024-07-22 12:28:11.629942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.832 [2024-07-22 12:28:11.629970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.832 qpair failed and we were unable to recover it. 
00:33:03.832 [2024-07-22 12:28:11.630161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.832 [2024-07-22 12:28:11.630186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:03.832 qpair failed and we were unable to recover it.
00:33:03.835 [... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 12:28:11.630 through 12:28:11.658 ...]
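Errno 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420, so every nvme_tcp_qpair_connect_sock() attempt on tqpair 0xa58450 fails immediately and the host gives up on the qpair. As a hedged sketch (not part of the test scripts), the listener state could be checked from the target side with ss inside the network namespace named later in the trace:

    # Is anything listening on the NVMe/TCP port inside the target's netns?
    # errno 111 (ECONNREFUSED) on the host side means there is no listener.
    if ! ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420' | grep -q LISTEN; then
        echo 'no NVMe/TCP listener on 10.0.0.2:4420 yet' >&2
    fi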
00:33:03.835 [... the errno = 111 connect()/qpair failure triplet for tqpair=0xa58450 continues to interleave with the trace below, 12:28:11.658 through 12:28:11.661 ...]
00:33:03.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1146293 Killed "${NVMF_APP[@]}" "$@"
00:33:03.835 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:03.835 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:03.835 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:03.835 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:03.835 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
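The shell notice above explains the refusals: target_disconnect.sh killed the running target app (pid 1146293, the previous "${NVMF_APP[@]}" instance), and disconnect_init 10.0.0.2 is now calling nvmfappstart -m 0xF0 to bring up a fresh one. A minimal sketch of that kill-then-restart pattern, assuming only that NVMF_APP holds the target command line as in the trace (the surrounding variable names are hypothetical):

    # Start the target app in the background and remember its pid.
    "${NVMF_APP[@]}" -m 0xF0 &
    app_pid=$!
    # ... drive the disconnect test against it ...
    # Kill it hard; bash prints the "Killed" job notice seen above.
    kill -9 "$app_pid"
    wait "$app_pid" 2>/dev/null
    # Bring up a fresh instance for the next test case.
    "${NVMF_APP[@]}" -m 0xF0 &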
00:33:03.836 [... the errno = 111 connect()/qpair failure triplet for tqpair=0xa58450 continues to interleave with the trace below, 12:28:11.661 through 12:28:11.665 ...]
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1146849
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1146849
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1146849 ']'
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:03.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:03.836 12:28:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
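The restart itself is visible in the trace above: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with shared-memory id 0 (-i 0), a wide tracepoint mask (-e 0xFFFF) and core mask 0xF0 (cores 4-7); its pid, 1146849, is stored in nvmfpid, and waitforlisten polls for the RPC socket /var/tmp/spdk.sock with max_retries=100. A hedged approximation of that launch-and-wait sequence (paths and values copied from the trace; the polling loop only approximates waitforlisten, not its exact implementation):

    # Launch the target inside its network namespace and capture the pid.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Poll until the app creates its UNIX-domain RPC socket.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    while (( max_retries-- > 0 )); do
        [ -S "$rpc_addr" ] && break
        sleep 0.1
    done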
00:33:03.840 [2024-07-22 12:28:11.698781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.698810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.698981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.699035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.699154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.699183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.699343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.699368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.699513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.699539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.699688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.699715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.699858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.699888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.700050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.700076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.700223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.700249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.700357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.700383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 
00:33:03.840 [2024-07-22 12:28:11.700560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.700588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.700729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.700755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.700896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.700922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.701125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.701154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.701304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.701333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.701483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.701509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.701623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.701666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.701854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.701883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.702036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.702065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.702226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.702253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 
00:33:03.840 [2024-07-22 12:28:11.702372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.702399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.702570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.702598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.702790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.702819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.702981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.703007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.703166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.703195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.703314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.703342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.703551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.703579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.703772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.703798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.703938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.703964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.704081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.704106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 
00:33:03.840 [2024-07-22 12:28:11.704312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.704338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.704535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.704563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.704745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.704772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.704944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.704974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.705142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.705171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.705335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.705361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.840 qpair failed and we were unable to recover it. 00:33:03.840 [2024-07-22 12:28:11.705547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.840 [2024-07-22 12:28:11.705575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.705753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.705779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.705921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.705947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.706056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.706081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 
00:33:03.841 [2024-07-22 12:28:11.706225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.706250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.706397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.706423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.706643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.706672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.706824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.706850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.706973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.706998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.707138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.707163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.707353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.707381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.707543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.707569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.707736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.707765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 00:33:03.841 [2024-07-22 12:28:11.707925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.841 [2024-07-22 12:28:11.707954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:03.841 qpair failed and we were unable to recover it. 
00:33:03.841 [2024-07-22 12:28:11.709139] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization...
00:33:03.841 [2024-07-22 12:28:11.709215] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:04.131 [2024-07-22 12:28:11.731520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.131 [2024-07-22 12:28:11.731545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.131 qpair failed and we were unable to recover it.
00:33:04.131 [2024-07-22 12:28:11.731718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.731745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.731863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.731888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.732024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.732049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.732224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.732254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.732412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.732440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.732607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.732638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.732784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.732809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.732974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.733002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.733162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.733188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.733322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.733349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 
00:33:04.131 [2024-07-22 12:28:11.733533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.733558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.733705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.733731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.733910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.733937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.734052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.734093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.734246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.734274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.734398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.734426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.734595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.734638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.734756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.734781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.734954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.734982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.735109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.735137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 
00:33:04.131 [2024-07-22 12:28:11.735305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.735331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.735472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.735497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.735659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.735688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.735842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.735870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.736069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.736095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.736283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.736311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.736440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.736469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.736628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.736654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.736877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.736903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.737044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.737072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 
00:33:04.131 [2024-07-22 12:28:11.737260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.737289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.737424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.737453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.131 qpair failed and we were unable to recover it. 00:33:04.131 [2024-07-22 12:28:11.737622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.131 [2024-07-22 12:28:11.737648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.737784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.737810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.738105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.738162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.738349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.738377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.738517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.738543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.738671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.738698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.738907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.738932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.739050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.739076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 
00:33:04.132 [2024-07-22 12:28:11.739241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.739266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.739402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.739431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.739597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.739648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.739793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.739823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.739971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.739996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.740110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.740152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.740317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.740345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.740535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.740561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.740676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.740703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.740812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.740837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 
00:33:04.132 [2024-07-22 12:28:11.740984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.741009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.741174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.741202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.741391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.741416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.741575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.741603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.741759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.741784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.741924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.741950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.742098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.742124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.742269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.742298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.742431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.742459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.742628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.742657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 
00:33:04.132 [2024-07-22 12:28:11.742845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.742870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.742989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.743030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.743197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.743225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.743349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.743378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.743512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.743537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.132 qpair failed and we were unable to recover it. 00:33:04.132 [2024-07-22 12:28:11.743674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.132 [2024-07-22 12:28:11.743700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.743908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.743937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.744121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.744147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.744306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.744331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.744522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.744550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 
00:33:04.133 [2024-07-22 12:28:11.744718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.744747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.744916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.744942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.745123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.745149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.745314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.745343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.745515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.745544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.745691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.745721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.745911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.745937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.746053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.746078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.746248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.746274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.746415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.746443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 
00:33:04.133 [2024-07-22 12:28:11.746608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.746650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.746833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.746859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.747051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.747102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.747263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.747289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.747429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.747459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.747627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.747656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.747928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.747983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.748136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.748165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.748302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.748328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 00:33:04.133 [2024-07-22 12:28:11.748494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.133 [2024-07-22 12:28:11.748535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.133 qpair failed and we were unable to recover it. 
00:33:04.133 EAL: No free 2048 kB hugepages reported on node 1
00:33:04.133 [... connect()/qpair error pair continues repeating, 12:28:11.748716 through 12:28:11.752327 ...]
00:33:04.134 [... connect()/qpair error pair continues repeating, 12:28:11.752443 through 12:28:11.752671 ...]
00:33:04.134 [2024-07-22 12:28:11.752741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:04.135 [... connect()/qpair error pair continues repeating through 12:28:11.762246; every attempt to reach tqpair=0xa58450 at 10.0.0.2 port 4420 fails with errno = 111 (connection refused) and the qpair is not recovered ...]
00:33:04.135 [2024-07-22 12:28:11.762391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.762416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.762557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.762583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.762744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.762770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.762887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.762915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.763059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.763085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.763230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.763257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.763371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.763397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.763568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.763593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.763740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.763766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.763880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.763913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 
00:33:04.135 [2024-07-22 12:28:11.764094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.764120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.764257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.764282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.764417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.764442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.764551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.764576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.764708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.764734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.764851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.764876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.765050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.765075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.765217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.765242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.765416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.765441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 00:33:04.135 [2024-07-22 12:28:11.765589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.135 [2024-07-22 12:28:11.765620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.135 qpair failed and we were unable to recover it. 
00:33:04.135 [2024-07-22 12:28:11.765738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.765765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.765904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.765930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.766086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.766111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.766254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.766284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.766456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.766482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.766624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.766651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.766770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.766795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.766912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.766938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.767087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.767113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.767240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.767266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 
00:33:04.136 [2024-07-22 12:28:11.767392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.767417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.767562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.767587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.767754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.767780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.767895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.767921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.768088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.768113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.768256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.768282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.768421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.768446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.768590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.768632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.768802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.768828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.768945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.768970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 
00:33:04.136 [2024-07-22 12:28:11.769091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.769118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.769235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.769261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.769397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.769423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.769567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.769593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.769722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.769747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.769894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.769920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.770034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.770060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.770206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.770232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.770399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.770424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.770552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.770577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 
00:33:04.136 [2024-07-22 12:28:11.770739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.770765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.770913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.770938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.771063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.771089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.771231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.771256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.771369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.771394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.136 [2024-07-22 12:28:11.771565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.136 [2024-07-22 12:28:11.771592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.136 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.771725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.771751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.771873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.771900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.772034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.772060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.772236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.772262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 
00:33:04.137 [2024-07-22 12:28:11.772378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.772404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.772519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.772545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.772706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.772733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.772855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.772881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.773006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.773031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.773178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.773204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.773350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.773375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.773521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.773546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.773700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.773726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.773876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.773903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 
00:33:04.137 [2024-07-22 12:28:11.774081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.774107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.774229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.774256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.774433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.774459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.774600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.774648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.774782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.774809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.774969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.774994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.775161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.775188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.775308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.775334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.775449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.775475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.775624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.775650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 
00:33:04.137 [2024-07-22 12:28:11.775777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.775802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.775925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.775950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.776066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.776091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.776244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.776269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.776389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.776415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.776592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.776623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.776767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.776793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.776910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.776936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.777083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.777110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.777221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.777247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 
00:33:04.137 [2024-07-22 12:28:11.777405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.777431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.777588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.777624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.777737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.777763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.777907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.777934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.778056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.778082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.778228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.778254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.778373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.137 [2024-07-22 12:28:11.778398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.137 qpair failed and we were unable to recover it. 00:33:04.137 [2024-07-22 12:28:11.778567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.778592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.778751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.778778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.778930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.778957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 
00:33:04.138 [2024-07-22 12:28:11.779115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.779141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.779310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.779336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.779481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.779507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.779651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.779678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.779801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.779835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.779965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.779992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.780104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.780130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.780259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.780285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.780397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.780423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.780567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.780594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 
00:33:04.138 [2024-07-22 12:28:11.780720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.780746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.780894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.780919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.781035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.781061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.781197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.781223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.781371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.781396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.781566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.781592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.781720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.781746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.781891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.781917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.782063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.782090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.782219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.782245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 
00:33:04.138 [2024-07-22 12:28:11.782387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.782412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.782534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.782559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.782715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.782741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.782855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.782880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.782999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.783024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.783167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.783194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.783317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:04.138 [2024-07-22 12:28:11.783341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.783366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.783508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.783535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.783715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.783742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.783881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.783907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 
00:33:04.138 [2024-07-22 12:28:11.784044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.784069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.784242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.784268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.784419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.784444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.784562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.784587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.784737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.784763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.784910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.784937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.785094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.785120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.785229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.138 [2024-07-22 12:28:11.785254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.138 qpair failed and we were unable to recover it. 00:33:04.138 [2024-07-22 12:28:11.785394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.785419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.785554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.785581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 
00:33:04.139 [2024-07-22 12:28:11.785700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.785726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.785875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.785901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.786043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.786069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.786192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.786218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.786366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.786392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.786544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.786573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.786729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.786755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.786960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.786985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.787130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.787155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.787274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.787301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 
00:33:04.139 [2024-07-22 12:28:11.787448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.787474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.787644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.787670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.787838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.787863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.788035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.788060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.788206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.788231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.788378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.788404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.788522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.788547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.788691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.788717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.788859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.788884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.789040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.789066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 
00:33:04.139 [2024-07-22 12:28:11.789179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.789206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.789380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.789406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.789529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.789554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.789701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.789727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.789847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.789873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.789983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.790009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.790188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.790214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.790365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.790391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.790527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.790553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.790674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.790700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 
00:33:04.139 [2024-07-22 12:28:11.790851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.790877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.791023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.791049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.791218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.791244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.791365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.791390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.791536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.791562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.791716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.791743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.791893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.791918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.792033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.792059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.792202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.792227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.139 qpair failed and we were unable to recover it. 00:33:04.139 [2024-07-22 12:28:11.792373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.139 [2024-07-22 12:28:11.792399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 
00:33:04.140 [2024-07-22 12:28:11.792570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.792595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.792721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.792748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.792916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.792942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.793112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.793137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.793286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.793312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.793483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.793509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.793670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.793702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.793828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.793854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.794005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.794030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.794153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.794180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 
00:33:04.140 [2024-07-22 12:28:11.794338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.794363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.794477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.794505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.794626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.794653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.794811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.794838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.795010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.795036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.795187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.795213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.795384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.795410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.795555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.795581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.795705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.795732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.795850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.795875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 
00:33:04.140 [2024-07-22 12:28:11.796050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.796076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.796192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.796217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.796369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.796394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.796537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.796562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.796738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.796765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.796879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.796904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.797030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.797056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.797196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.797222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.797367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.797392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.797562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.797589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 
00:33:04.140 [2024-07-22 12:28:11.797708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.797734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.797890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.797916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.798062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.798088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.798235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.798262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.798381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.798409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.798552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.798579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.798760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.798787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.798919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.798948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.799092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.799118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.140 [2024-07-22 12:28:11.799277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.799304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 
00:33:04.140 [2024-07-22 12:28:11.799447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.140 [2024-07-22 12:28:11.799474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.140 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.799640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.799667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.799812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.799839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.799990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.800016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.800163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.800189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.800310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.800336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.800510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.800536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.800714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.800741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.800915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.800942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.801094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.801120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 
00:33:04.141 [2024-07-22 12:28:11.801231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.801257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.801409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.801434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.801583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.801611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.801733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.801760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.801881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.801907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.802080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.802107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.802232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.802258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.802377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.802404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.802543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.802570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.802715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.802743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 
00:33:04.141 [2024-07-22 12:28:11.802911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.802937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.803109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.803136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.803258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.803285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.803432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.803460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.803571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.803598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.803770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.803797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.803938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.803964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.804114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.804140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.804282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.804308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.804462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.804488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 
00:33:04.141 [2024-07-22 12:28:11.804635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.804662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.804776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.804802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.804924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.804950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.805123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.805149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.805301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.805332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.805484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.141 [2024-07-22 12:28:11.805511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.141 qpair failed and we were unable to recover it. 00:33:04.141 [2024-07-22 12:28:11.805661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.805688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.805811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.805838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.805946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.805973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.806123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.806150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 
00:33:04.142 [2024-07-22 12:28:11.806291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.806317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.806465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.806491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.806610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.806641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.806760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.806786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.806905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.806931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.807074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.807100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.807247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.807273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.807392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.807419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.807576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.807602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.807734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.807765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 
00:33:04.142 [2024-07-22 12:28:11.807915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.807942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.808089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.808116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.808257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.808284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.808403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.808429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.808574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.808601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.808762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.808790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.808960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.808987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.809137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.809164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.809283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.809309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.809479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.809506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 
00:33:04.142 [2024-07-22 12:28:11.809625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.809651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.809791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.809817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.809943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.809970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.810114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.810140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.810310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.810336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.810510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.810536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.810705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.810733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.810849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.810875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.811018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.811045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.811216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.811243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 
00:33:04.142 [2024-07-22 12:28:11.811391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.811418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.811563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.811589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.811737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.811764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.811890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.811917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.812092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.812118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.812268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.812298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.812419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.812444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.142 qpair failed and we were unable to recover it. 00:33:04.142 [2024-07-22 12:28:11.812586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.142 [2024-07-22 12:28:11.812631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.812742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.812768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.812910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.812936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 
00:33:04.143 [2024-07-22 12:28:11.813055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.813082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.813251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.813278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.813405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.813432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.813573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.813599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.813745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.813772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.813917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.813945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.814094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.814120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.814265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.814291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.814463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.814489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 00:33:04.143 [2024-07-22 12:28:11.814626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.143 [2024-07-22 12:28:11.814669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.143 qpair failed and we were unable to recover it. 
00:33:04.143 ... same connect() failed (errno = 111) / tqpair=0xa58450 sock connection error / qpair failed sequence repeating continuously from 12:28:11.814817 through 12:28:11.847133 ...
00:33:04.148 [2024-07-22 12:28:11.847260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.847287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.847438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.847464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.847618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.847645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.847760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.847787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.847915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.847941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.848052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.848078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.848198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.848226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.848450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.848477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.848626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.848654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.848794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.848820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 
00:33:04.148 [2024-07-22 12:28:11.848990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.849016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.849160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.849186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.849327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.849353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.849468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.849494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.849625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.849659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.849837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.849864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.850041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.850067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.850226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.850252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.850374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.850403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.850532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.850558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 
00:33:04.148 [2024-07-22 12:28:11.850684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.850712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.850882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.850908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.851053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.851079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.851198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.851224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.851366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.851391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.851540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.851566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.851714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.851741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.851866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.851892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.852011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.852037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.852179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.852205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 
00:33:04.148 [2024-07-22 12:28:11.852355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.852381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.852503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.852529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.852686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.852713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.852835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.852861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.853003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.853029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.853149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.853175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.853353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.853379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.853527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.853553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.148 [2024-07-22 12:28:11.853699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.148 [2024-07-22 12:28:11.853725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.148 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.853851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.853877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 
00:33:04.149 [2024-07-22 12:28:11.854023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.854050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.854193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.854220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.854352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.854378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.854518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.854544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.854664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.854692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.854803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.854830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.855000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.855026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.855176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.855202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.855426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.855453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.855598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.855629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 
00:33:04.149 [2024-07-22 12:28:11.855748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.855774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.855943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.855969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.856112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.856138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.856277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.856302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.856447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.856473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.856624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.856651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.856798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.856825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.856980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.857006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.857143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.857169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.857291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.857318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 
00:33:04.149 [2024-07-22 12:28:11.857436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.857462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.857606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.857639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.857762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.857788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.857934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.857959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.858098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.858124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.858259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.858287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.858404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.858430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.858571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.858604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.858774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.858801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.858951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.858977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 
00:33:04.149 [2024-07-22 12:28:11.859099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.859125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.859298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.859324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.859447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.859473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.859637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.859664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.859782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.149 [2024-07-22 12:28:11.859808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.149 qpair failed and we were unable to recover it. 00:33:04.149 [2024-07-22 12:28:11.859921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.859947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.860170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.860197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.860367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.860393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.860507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.860533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.860677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.860704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 
00:33:04.150 [2024-07-22 12:28:11.860812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.860837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.860992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.861018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.861137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.861163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.861304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.861330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.861480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.861507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.861658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.861685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.861799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.861825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.861969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.861996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.862166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.862193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.862364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.862390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 
00:33:04.150 [2024-07-22 12:28:11.862560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.862586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.862815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.862841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.862984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.863010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.863149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.863175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.863288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.863314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.863467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.863493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.863631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.863659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.863830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.863857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.863973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.863999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.864122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.864148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 
00:33:04.150 [2024-07-22 12:28:11.864320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.864346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.864495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.864521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.864744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.864772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.864941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.864967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.865110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.865136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.865302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.865328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.865469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.865495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.865653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.865680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.865827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.865854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.865974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.866001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 
00:33:04.150 [2024-07-22 12:28:11.866175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.866202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.866327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.866353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.866516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.866542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.866653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.866680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.866849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.866876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.866998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.867024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.150 [2024-07-22 12:28:11.867166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.150 [2024-07-22 12:28:11.867192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.150 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.867316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.867342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.867479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.867506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.867653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.867680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 
00:33:04.151 [2024-07-22 12:28:11.867824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.867850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.867970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.867996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.868122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.868148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.868272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.868298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.868439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.868466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.868588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.868620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.868766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.868793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.868941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.868967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.869110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.869136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.869280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.869308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 
00:33:04.151 [2024-07-22 12:28:11.869482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.869508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.869625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.869652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.869798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.869824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.869951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.869976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.870117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.870143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.870290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.870319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.870463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.870490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.870609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.870652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.870822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.870848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.871020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.871046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 
00:33:04.151 [2024-07-22 12:28:11.871189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.871216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.871340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.871367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.871548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.871573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.871734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.871760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.871910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.871936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.872055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.872081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.872217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.872243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.872354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.872381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.872505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.872531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.872694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.872720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 
00:33:04.151 [2024-07-22 12:28:11.872838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.872864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.872983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.873009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.873171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.873197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.873355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.873382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.873505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.873532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.873673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.151 [2024-07-22 12:28:11.873690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.151 [2024-07-22 12:28:11.873710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.151 [2024-07-22 12:28:11.873715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.151 [2024-07-22 12:28:11.873725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.151 qpair failed and we were unable to recover it. 00:33:04.151 [2024-07-22 12:28:11.873738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.151 [2024-07-22 12:28:11.873749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:04.151 [2024-07-22 12:28:11.873854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.152 [2024-07-22 12:28:11.873879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.152 qpair failed and we were unable to recover it. 
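The app_setup_trace notices above describe how to capture the target's tracepoint data while it is still running. A minimal operator sketch, using the exact command and file the notices name (the destination path below is an arbitrary placeholder, not part of this run):

    spdk_trace -s nvmf -i 0          # snapshot the live 'nvmf' app's events (shm instance 0), per the NOTICE above
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw trace file for offline analysis/debug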
00:33:04.152 [2024-07-22 12:28:11.873837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:04.152 [2024-07-22 12:28:11.873871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:04.152 [2024-07-22 12:28:11.873995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.874020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.873923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:04.152 [2024-07-22 12:28:11.873925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:04.152 [2024-07-22 12:28:11.874152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.874178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.874298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.874328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.874476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.874501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.874698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.874724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.874851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.874877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.874997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.875024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.875195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.875221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.875339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.875366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
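The *NOTICE* lines interleaved with the errors above mark the nvmf target application coming up on this host: app_setup_trace reports the tracepoint group mask (0xFFFF) and the trace shared-memory file (/dev/shm/nvmf_trace.0, viewable at runtime with the log's own suggestion, 'spdk_trace -s nvmf -i 0'), and reactor_run reports SPDK reactors starting on cores 4 through 7. One plausible reading, though not provable from the log alone, is that the initiator's connect() attempts are racing the target's listener setup, which would produce exactly this run of ECONNREFUSED retries until the listener on 10.0.0.2:4420 is up.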
00:33:04.152 [2024-07-22 12:28:11.875482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.875507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.875634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.875661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.875799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.875824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.875971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.875997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.876120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.876147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.876260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.876286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.876460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.876486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.876633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.876659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.876800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.876826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.876974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.876999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.877114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.877139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.877257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.877284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.877407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.877434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.877554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.877580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.877697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.877723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.877844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.877870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.877989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.878015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.878161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.878186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.878334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.878360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.878480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.878506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.878640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.878667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.878784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.878811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.878935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.878960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.879100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.879126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.879249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.879274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.879399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.879426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.879546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.879572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.879716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.879743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.879860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.879887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.880005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.880031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.152 qpair failed and we were unable to recover it.
00:33:04.152 [2024-07-22 12:28:11.880266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.152 [2024-07-22 12:28:11.880292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.880412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.880438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.880662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.880688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.880815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.880841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.880987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.881013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.881157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.881183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.881331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.881357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.881464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.881489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.881607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.881637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.881755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.881781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.881895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.881920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.882073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.882099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.882218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.882244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.882367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.882394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.882621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.882647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.882815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.882840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.883063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.883089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.883213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.883238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.883406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.883432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.883541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.883567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.883679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.883706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.883858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.883884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.884002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.884028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.884168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.884194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.884341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.884367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.884504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.884529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.884669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.884695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.884817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.884843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.884991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.885016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.885126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.885152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.885269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.885295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.885442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.885472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.885594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.885626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.885745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.885771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.885911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.885937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.886046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.886071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.886211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.886237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.886423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.886449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.886569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.886595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.886767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.886793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.153 qpair failed and we were unable to recover it.
00:33:04.153 [2024-07-22 12:28:11.886906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.153 [2024-07-22 12:28:11.886932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.887060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.887086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.887228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.887253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.887395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.887421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.887546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.887572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.887727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.887754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.887871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.887897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.888047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.888073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.888201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.888227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.888369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.888395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.888535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.888561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.888709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.888737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.888883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.888909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.889052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.889078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.889196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.889224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.889335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.889361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.889529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.889556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.889728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.889754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.889869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.889895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.890045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.890071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.890188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.890215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.890358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.890384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.890505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.890531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.890656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.890682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.890801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.890826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.890944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.890969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.891083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.891109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.891245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.891271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.891412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.891439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.891562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.891588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.891715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.891742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.891883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.891908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.892053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.892086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.892210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.892236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.892407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.892433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.892548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.892573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.892784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.892810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.892931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.154 [2024-07-22 12:28:11.892957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.154 qpair failed and we were unable to recover it.
00:33:04.154 [2024-07-22 12:28:11.893125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.893152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.893269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.893297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.893436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.893462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.893604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.893647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.893765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.893791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.893943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.893969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.894138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.894164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.894279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.894305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.894429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.894454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.894604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.894636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.894762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.894788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.894910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.894936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.895055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.895081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.895234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.895260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.895378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.895404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.895514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.895540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.895694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.895721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.895842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.895868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.895985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.896011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.896164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.896190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.896306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.896334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.896478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.896507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.896631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.896657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.896774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.896800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.896906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.896932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.897067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.897093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.897207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.897232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.897375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.897400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.897517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.897542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.897683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.897709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.897821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.897846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.897958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.897983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.898131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.898156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.898276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.898302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.898431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.898457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.898627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.898653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.898771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.898797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.898948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.898973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.899086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.899112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.899258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.899284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.899434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.899459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.155 qpair failed and we were unable to recover it.
00:33:04.155 [2024-07-22 12:28:11.899580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.155 [2024-07-22 12:28:11.899605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.899733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.899759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.899879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.899905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.900042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.900068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.900187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.900214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.900337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.900362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.900505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.900531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.900650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.900676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.900825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.900850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.900965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.900993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.901132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.156 [2024-07-22 12:28:11.901158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.156 qpair failed and we were unable to recover it.
00:33:04.156 [2024-07-22 12:28:11.901274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.901300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.901413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.901440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.901561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.901587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.901711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.901738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.901862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.901888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.902035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.902060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.902206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.902231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.902374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.902400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.902515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.902542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.902662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.902689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 
00:33:04.156 [2024-07-22 12:28:11.902818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.902848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.902963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.902989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.903136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.903163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.903321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.903347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.903496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.903522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.903638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.903664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.903781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.903807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.903911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.903936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.904083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.904109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.904230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.904256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 
00:33:04.156 [2024-07-22 12:28:11.904381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.904407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.904524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.904550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.904695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.904721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.904867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.904903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.905048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.905074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.905211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.905237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.905349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.905374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.905493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.905519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.905675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.905701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.156 [2024-07-22 12:28:11.905845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.905870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 
00:33:04.156 [2024-07-22 12:28:11.906045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.156 [2024-07-22 12:28:11.906071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.156 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.906223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.906248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.906418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.906443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.906591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.906621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.906735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.906760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.906902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.906927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.907043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.907068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.907186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.907218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.907363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.907389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.907530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.907555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 
00:33:04.157 [2024-07-22 12:28:11.907669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.907696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.907810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.907835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.907955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.907980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.908104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.908129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.908291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.908317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.908462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.908489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.908654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.908681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.908825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.908851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.908972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.908998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 00:33:04.157 [2024-07-22 12:28:11.909119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.909145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it. 
00:33:04.157 [2024-07-22 12:28:11.909290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.909317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it.
00:33:04.157 [2024-07-22 12:28:11.909368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66490 (9): Bad file descriptor
00:33:04.157 [2024-07-22 12:28:11.909589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.157 [2024-07-22 12:28:11.909639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.157 qpair failed and we were unable to recover it.
00:33:04.160 [... the same connect() failed / sock connection error / qpair failed triplet repeats through 12:28:11.931558, alternating between tqpair=0xa58450 (roughly 69 more failures) and tqpair=0x7f0554000b90 (roughly 68 more failures), all against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:33:04.161 [2024-07-22 12:28:11.931710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.931737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.931854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.931881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.932003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.932030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.932176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.932203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.932372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.932398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.932543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.932570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.932706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.932733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.932848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.932875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.932996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.933024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.933171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.933198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 
00:33:04.161 [2024-07-22 12:28:11.933319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.933347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.933521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.933548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.933764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.933808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.933961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.933988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.934107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.934134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.934252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.934279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.934394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.934421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.934543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.934569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.934685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.934712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.934830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.934856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 
00:33:04.161 [2024-07-22 12:28:11.934974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.935001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.935151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.935177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.935300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.935327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.935440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.935472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.935626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.935668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.935828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.935856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.935972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.935998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.936107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.936134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.936287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.936316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.936438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.936466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 
00:33:04.161 [2024-07-22 12:28:11.936639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.936667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.936783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.936810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.936989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.937016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.937190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.937217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.937355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.937382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.937524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.937551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.937725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.937766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.937911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.937939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.938065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.938092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.938235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.938261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 
00:33:04.161 [2024-07-22 12:28:11.938374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.161 [2024-07-22 12:28:11.938401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.161 qpair failed and we were unable to recover it. 00:33:04.161 [2024-07-22 12:28:11.938531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.938559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.938678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.938706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.938828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.938853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.939012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.939039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.939213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.939240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.939358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.939385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.939504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.939530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.939676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.939702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.939824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.939850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 
00:33:04.162 [2024-07-22 12:28:11.940011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.940038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.940149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.940186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.940338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.940365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.940471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.940508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.940661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.940689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.940814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.940842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.940995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.941023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.941146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.941172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.941318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.941346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.941463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.941491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 
00:33:04.162 [2024-07-22 12:28:11.941634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.941670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.941785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.941811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.941983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.942009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.942154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.942185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.942292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.942319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.942457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.942484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.942603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.942636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.942815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.942842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.942968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.942994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.943108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.943136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 
00:33:04.162 [2024-07-22 12:28:11.943312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.943339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.943452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.943478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.943623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.943650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.943765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.943790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.943919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.943945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.944072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.944099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.162 qpair failed and we were unable to recover it. 00:33:04.162 [2024-07-22 12:28:11.944213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.162 [2024-07-22 12:28:11.944240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.944418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.944445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.944559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.944586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.944744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.944771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 
00:33:04.163 [2024-07-22 12:28:11.944900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.944933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.945085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.945112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.945228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.945255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.945409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.945435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.945546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.945573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.945727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.945754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.945862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.945888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.946038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.946064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.946169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.946196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.946316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.946342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 
00:33:04.163 [2024-07-22 12:28:11.946525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.946552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.946705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.946732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.946860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.946887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.947028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.947055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.947170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.947196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.947319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.947345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.947458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.947485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.947608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.947641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.947776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.947803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.947928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.947958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 
00:33:04.163 [2024-07-22 12:28:11.948080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.948108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.948237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.948264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.948427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.948453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.948571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.948602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.948740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.948767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.948892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.948920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.949065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.949091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.949236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.949262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.949369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.949395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.949506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.949532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 
00:33:04.163 [2024-07-22 12:28:11.949658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.949686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.949840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.949867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.949978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.950004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.950144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.950170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.950286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.950313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.950466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.950492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.950607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.163 [2024-07-22 12:28:11.950641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.163 qpair failed and we were unable to recover it. 00:33:04.163 [2024-07-22 12:28:11.950802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.950829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.950946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.950972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.951077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.951103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 
00:33:04.164 [2024-07-22 12:28:11.951212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.951238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.951393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.951420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.951535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.951561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.951741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.951785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.951918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.951945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.952069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.952097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.952219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.952246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.952372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.952404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.952524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.952550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 00:33:04.164 [2024-07-22 12:28:11.952689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.164 [2024-07-22 12:28:11.952717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.164 qpair failed and we were unable to recover it. 
00:33:04.164 [2024-07-22 12:28:11.952841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.164 [2024-07-22 12:28:11.952874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420
00:33:04.164 qpair failed and we were unable to recover it.
[... the same three-line error block repeats roughly 200 more times over the next ~34 ms (12:28:11.952 through 12:28:11.987), identical except for the timestamp and the tqpair pointer, which cycles among 0xa58450, 0x7f0544000b90, 0x7f054c000b90, and 0x7f0554000b90 ...]
00:33:04.169 [2024-07-22 12:28:11.986831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.169 [2024-07-22 12:28:11.986857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420
00:33:04.169 qpair failed and we were unable to recover it.
00:33:04.169 [2024-07-22 12:28:11.986989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.987015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.987155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.987181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.987308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.987334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.987464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.987505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.987640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.987681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.987840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.987867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.987982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.988007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.988128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.988157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.988333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.988359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.988473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.988499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 
00:33:04.169 [2024-07-22 12:28:11.988626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.988654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.988764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.988790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.988914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.988941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.989064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.989095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.989243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.989268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.989426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.989451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.169 [2024-07-22 12:28:11.989610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.169 [2024-07-22 12:28:11.989657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.169 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.989790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.989830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.989980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.990013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.990157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.990183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 
00:33:04.170 [2024-07-22 12:28:11.990301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.990327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.990445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.990473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.990624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.990651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.990765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.990791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.990936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.990962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.991094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.991121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.991268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.991295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.991415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.991442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.991557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.991584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.991747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.991774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 
00:33:04.170 [2024-07-22 12:28:11.991899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.991925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.992047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.992073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.992187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.992213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.992337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.992363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.992505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.992531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.992651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.992677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.992799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.992825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.992944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.992969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.993092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.993119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.993228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.993255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 
00:33:04.170 [2024-07-22 12:28:11.993399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.993426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.993571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.993597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.993731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.993758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.993875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.993901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.994010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.994036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.994143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.994169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.994286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.994312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.994429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.994455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.994568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.994595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.994724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.994751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 
00:33:04.170 [2024-07-22 12:28:11.994872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.994898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.995007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.995033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.995145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.995172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.995280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.995306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.995438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.995465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.995619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.995645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.995761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.995787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.170 [2024-07-22 12:28:11.995909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.170 [2024-07-22 12:28:11.995934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.170 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.996044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.996070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.996197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.996227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 
00:33:04.171 [2024-07-22 12:28:11.996358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.996383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.996520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.996561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.996725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.996754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.996875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.996902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.997057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.997084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.997204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.997232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.997351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.997378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.997493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.997521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.997639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.997666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.997783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.997809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 
00:33:04.171 [2024-07-22 12:28:11.997919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.997945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.998121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.998147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.998262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.998289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.998423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.998452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.998574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.998600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.998744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.998770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.998886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.998912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.999047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.999073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.999209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.999235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.999361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.999389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 
00:33:04.171 [2024-07-22 12:28:11.999565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.999626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.999755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.999782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:11.999925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:11.999952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:12.000100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:12.000127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:12.000357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:12.000383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:12.000519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:12.000546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.171 [2024-07-22 12:28:12.000704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.171 [2024-07-22 12:28:12.000745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.171 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.000896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.000923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.001043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.001070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.001187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.001213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 
00:33:04.172 [2024-07-22 12:28:12.001325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.001351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.001481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.001508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.001624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.001651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.001792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.001818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.001928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.001953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.002077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.002103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.002243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.002269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.002393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.002419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.002556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.002582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.002707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.002739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 
00:33:04.172 [2024-07-22 12:28:12.002866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.002892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.003070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.003097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.003235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.003260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.003370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.003396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.003526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.003567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.003710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.003739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.003868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.003896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.004028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.004055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.004164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.004191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.004306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.004333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 
00:33:04.172 [2024-07-22 12:28:12.004483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.004511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.004634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.004662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.004784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.004810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.004956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.004983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.005097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.005123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.005236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.005263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.005412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.005439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.005566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.005593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.005746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.005772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.005894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.005921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 
00:33:04.172 [2024-07-22 12:28:12.006092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.006118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.006238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.006265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.006406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.006433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.006563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.006590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.006747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.006774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.006916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.006943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.007073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.007112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.007260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.007287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.007415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.007445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.007572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.007598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 
00:33:04.172 [2024-07-22 12:28:12.007728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.007753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.007898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.007924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.008050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.008077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.172 [2024-07-22 12:28:12.008193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.172 [2024-07-22 12:28:12.008219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.172 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.008327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.008361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.008492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.008518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.008654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.008699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.008830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.008857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.008984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.009010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.009155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.009187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 
00:33:04.173 [2024-07-22 12:28:12.009296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.009322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.009444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.009470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.009582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.009609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.009735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.009762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.009917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.009943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.010058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.173 [2024-07-22 12:28:12.010085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:04.173 [2024-07-22 12:28:12.010236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 [2024-07-22 12:28:12.010263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 00:33:04.173 [2024-07-22 12:28:12.010405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.173 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:04.173 [2024-07-22 12:28:12.010432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa58450 with addr=10.0.0.2, port=4420 00:33:04.173 qpair failed and we were unable to recover it. 
00:33:04.173 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:04.173 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed, errno = 111 / qpair failed triplets for tqpair=0xa58450 continue; duplicates elided ...]
[... connect() failed, errno = 111 / qpair failed triplets continue uninterrupted, cycling through tqpair=0x7f0554000b90, 0xa58450, 0x7f0544000b90, and 0x7f054c000b90, all against addr=10.0.0.2, port=4420; duplicates elided ...]
[... connect() failed, errno = 111 / qpair failed triplets for tqpair=0x7f0554000b90 continue around the trace lines below; duplicates elided ...]
00:33:04.441 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:04.441 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:04.441 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.441 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
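Two setup steps surface in the trace above: the trap arms cleanup (dump the shared-memory segment via process_shm, then tear the target down via nvmftestfini) on SIGINT/SIGTERM/EXIT, and rpc_cmd asks the running SPDK target to create a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0. For context, a hedged sketch of how such a bdev is typically exported over NVMe/TCP with SPDK's rpc.py; the NQN and serial number are placeholder values, and only the bdev arguments and the 10.0.0.2:4420 listener come from this log:

    # Sketch of a typical export sequence (placeholder NQN/serial, real RPC names).
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Until the listener RPC (or its in-test equivalent) completes, every initiator connect attempt keeps landing on a closed port, which is exactly the errno = 111 stream filling this section.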
[... connect() failed, errno = 111 / qpair failed triplets continue for tqpair=0x7f0554000b90, 0xa58450, and 0x7f0544000b90, addr=10.0.0.2, port=4420; duplicates elided ...]
00:33:04.442 [2024-07-22 12:28:12.040938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.040965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.041107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.041133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.041283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.041309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.041469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.041497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.041725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.041753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.041870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.041898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.042050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.042077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.042301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.042327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.042474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.042501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.042677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.042704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 
00:33:04.442 [2024-07-22 12:28:12.042848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.042874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.043030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.043056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.043199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.043225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.043370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.043397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.043513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.043541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.043737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.043778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.043929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.043956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.044118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.044144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.044293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.044321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.044440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.044467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 
00:33:04.442 [2024-07-22 12:28:12.044621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.044647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.044786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.044813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.044948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.044984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.045126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.045151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.045304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.045333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.045449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.045476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.045600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.045633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.045753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.045781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.045897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.045934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.046163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.046194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 
00:33:04.442 [2024-07-22 12:28:12.046323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.046351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.046493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.046520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.046675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.046702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.046823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.046849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.046982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.047008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.047155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.047182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.442 qpair failed and we were unable to recover it. 00:33:04.442 [2024-07-22 12:28:12.047324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.442 [2024-07-22 12:28:12.047350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.047475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.047501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.047609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.047640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.047830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.047857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-07-22 12:28:12.048021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.048056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.048174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.048201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.048356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.048383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.048531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.048560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.048685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.048711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.048841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.048867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.049099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.049127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.049272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.049299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.049439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.049465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.049610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.049644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-07-22 12:28:12.049810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.049837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.049971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.049998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.050140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.050166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.050292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.050319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.050463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.050489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.050618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.050653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.050774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.050802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.050943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.050969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.051138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.051164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.051388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.051414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-07-22 12:28:12.051651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.051685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.051803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.051830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.051981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.052007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.052150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.052178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.052305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.052331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.052466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.052507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.052651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.052686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.052817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.052845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.052989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.053015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.053135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.053168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-07-22 12:28:12.053295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.053321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.053473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.053500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.053624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.053651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.053771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.053797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.053921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.053947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.054110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.054136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.054260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.054286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.054442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.054467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.054586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.054620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.054745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.054771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-07-22 12:28:12.054915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.054941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.055120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.055146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.055266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.055292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.055421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.055447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.055583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.055610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.055754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.055781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.055895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.055921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.056051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.056078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.056192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.056218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-07-22 12:28:12.056342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.443 [2024-07-22 12:28:12.056368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0554000b90 with addr=10.0.0.2, port=4420 00:33:04.443 qpair failed and we were unable to recover it. 
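On Linux, errno 111 is ECONNREFUSED: nothing was accepting TCP connections at 10.0.0.2:4420 (4420 is the default NVMe/TCP port) at the moment of each attempt, which is consistent with the target side being down or not yet listening, the situation this target_disconnect test exercises. A minimal standalone sketch of the failing call, assuming a plain blocking socket (illustrative only, not SPDK's posix.c implementation):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {0};

        if (fd < 0)
            return 1;
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);               /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on 10.0.0.2:4420, Linux sets errno to
             * ECONNREFUSED (111), the value posix_sock_create logs above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }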
00:33:04.443 [... more of the same failure triple for tqpair=0x7f054c000b90 (addr=10.0.0.2, port=4420), 12:28:12.056526 through 12:28:12.058206, omitted; interleaved test output kept below ...]
00:33:04.443 Malloc0
00:33:04.443 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:04.444 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:04.444 [... failure triples for tqpair=0x7f054c000b90 continue through 12:28:12.059305, omitted ...]
00:33:04.444 [... failure triples for tqpair=0x7f0554000b90 and 0x7f054c000b90 (addr=10.0.0.2, port=4420), 12:28:12.059472 through 12:28:12.061468, omitted ...]
00:33:04.444 [2024-07-22 12:28:12.061585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:04.444 [... failure triples for tqpair=0x7f054c000b90 continue through 12:28:12.062440, omitted ...]
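The *NOTICE* above is the target bringing its TCP transport up at 12:28:12.061585, in response to the rpc_cmd nvmf_create_transport call in the harness output; the host-side connects were already failing before that, and transport init alone does not open a listening socket, so refusals can continue until a listener is added. Each qpair simply keeps re-dialing, which is why the identical triple recurs. A hedged sketch of such a retry loop (hypothetical helper, not SPDK's actual nvme_tcp reconnect path):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical helper: keep re-dialing while the listener is absent.
     * SPDK's real reconnect logic lives in nvme_tcp.c and is more involved. */
    static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
    {
        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in sa = {0};
            struct timespec backoff = { .tv_nsec = 1000 * 1000 }; /* 1 ms */

            if (fd < 0)
                return -1;
            sa.sin_family = AF_INET;
            sa.sin_port = htons(port);
            inet_pton(AF_INET, ip, &sa.sin_addr);

            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
                return fd;                  /* target is listening again */

            close(fd);
            if (errno != ECONNREFUSED)
                return -1;                  /* give up on other errors */
            nanosleep(&backoff, NULL);      /* back off briefly, then retry */
        }
        return -1;
    }

    int main(void)
    {
        int fd = connect_with_retry("10.0.0.2", 4420, 100);

        if (fd >= 0)
            close(fd);
        return fd >= 0 ? 0 : 1;
    }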
00:33:04.444 [2024-07-22 12:28:12.062573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.444 [2024-07-22 12:28:12.062600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f054c000b90 with addr=10.0.0.2, port=4420
00:33:04.444 qpair failed and we were unable to recover it.
00:33:04.444 [... the same connect()/sock-connection-error/qpair-failed sequence repeats for every retry from 12:28:12.062745 through 12:28:12.069114; tqpair alternates between 0x7f054c000b90 and 0x7f0544000b90 ...]
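errno = 111 is ECONNREFUSED: the host-side qpair reaches 10.0.0.2, but nothing is accepting on port 4420 yet (the target's listener is only added further down, at 12:28:12.089839), so every connect() is refused and retried. A minimal bash probe, assuming bash's /dev/tcp support and the same 10.0.0.2:4420 endpoint from the log, reproduces the same condition outside SPDK:

# Try to open a TCP connection to the address/port the qpair is dialing.
# While no NVMe/TCP listener is up, this fails with ECONNREFUSED (errno 111);
# the subshell keeps fd 3 from leaking into the calling shell.
if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
  echo "listener up on 10.0.0.2:4420"
else
  echo "connect() to 10.0.0.2:4420 refused - errno 111 (ECONNREFUSED)"
fi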
00:33:04.445 [... connect() retries continue from 12:28:12.069336 through 12:28:12.069674 (tqpair 0x7f0544000b90), each ending "qpair failed and we were unable to recover it." ...]
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:04.445 [... retries continue in the background from 12:28:12.069821 through 12:28:12.070497 while the RPC runs ...]
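host/target_disconnect.sh@22 issues this through the harness's rpc_cmd wrapper. As a sketch, the equivalent direct call uses scripts/rpc.py from an SPDK checkout (default RPC socket assumed; only the NQN and serial come from the log):

# Create the NVMe-oF subsystem the test will connect to:
#   -a  allow any host NQN to connect
#   -s  serial number reported to hosts
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001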
00:33:04.445 [... connect() failed, errno = 111 / sock connection error / qpair failed triples continue for every retry from 12:28:12.070637 through 12:28:12.077056 (tqpair 0x7f0544000b90, later 0x7f054c000b90) ...]
00:33:04.445 [... connect() retries continue from 12:28:12.077228 through 12:28:12.077697, each ending "qpair failed and we were unable to recover it." ...]
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.445 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:04.445 [... retries continue in the background from 12:28:12.077867 through 12:28:12.078372 while the RPC runs ...]
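host/target_disconnect.sh@24 attaches the bdev Malloc0 as a namespace of the subsystem. A sketch of the same step with rpc.py; the test's own setup created Malloc0 earlier, so the 64 MiB / 512-byte sizing on the first line is illustrative only:

# Create a RAM-backed bdev (64 MiB, 512-byte blocks) if one does not exist yet...
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
# ...and expose it as a namespace of the subsystem created above.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0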
00:33:04.445 [... connect() failed, errno = 111 / sock connection error / qpair failed triples continue for every retry from 12:28:12.078513 through 12:28:12.085143; tqpair alternates between 0x7f054c000b90 and 0x7f0544000b90 ...]
00:33:04.446 [... connect() retries continue from 12:28:12.085295 through 12:28:12.085758 (tqpair 0x7f054c000b90), each ending "qpair failed and we were unable to recover it." ...]
00:33:04.446 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.446 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:04.446 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.446 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:04.446 [... retries continue in the background from 12:28:12.085939 through 12:28:12.086383 while the RPC runs ...]
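host/target_disconnect.sh@25 is the step the retry loop above has been waiting for: it adds the TCP listener on 10.0.0.2:4420. Sketched with rpc.py; the transport-create line is only needed if the run has not initialized the TCP transport already:

# Initialize the TCP transport once per target (defaults left as-is)...
./scripts/rpc.py nvmf_create_transport -t tcp
# ...then listen for the subsystem on the address/port the host is dialing.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420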
00:33:04.446 [... connect() failed, errno = 111 / sock connection error / qpair failed triples continue for every retry from 12:28:12.086530 through 12:28:12.089597 (tqpair 0x7f0544000b90) ...]
00:33:04.447 [2024-07-22 12:28:12.089785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.447 [2024-07-22 12:28:12.089810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0544000b90 with addr=10.0.0.2, port=4420
00:33:04.447 qpair failed and we were unable to recover it.
00:33:04.447 [2024-07-22 12:28:12.089839] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:04.447 [2024-07-22 12:28:12.092350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.447 [2024-07-22 12:28:12.092497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.447 [2024-07-22 12:28:12.092524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.447 [2024-07-22 12:28:12.092541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.447 [2024-07-22 12:28:12.092555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.447 [2024-07-22 12:28:12.092591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.447 qpair failed and we were unable to recover it.
00:33:04.447 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.447 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:04.447 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.447 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:04.447 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.447 12:28:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1146440
00:33:04.447 [2024-07-22 12:28:12.102233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.447 [2024-07-22 12:28:12.102411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.447 [2024-07-22 12:28:12.102440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.447 [2024-07-22 12:28:12.102458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.447 [2024-07-22 12:28:12.102488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.447 [2024-07-22 12:28:12.102519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.447 qpair failed and we were unable to recover it.
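Once "NVMe/TCP Target Listening" appears, TCP connects succeed and the failure mode changes: "sct 1, sc 130" is a Fabrics CONNECT rejected at the NVMe layer (sct 1 is the command-specific status type; sc 130 = 0x82 appears to be the Fabrics "Connect Invalid Parameters" code), consistent with the target's "Unknown controller ID 0x1" when an I/O qpair names a controller the target no longer tracks - the disconnect behavior this test exercises. Independently of the harness, the listener and discovery service can be checked from a host with nvme-cli, assuming it is installed and the kernel nvme-tcp module is available:

modprobe nvme-tcp
# Query the discovery subsystem the script just exposed on 10.0.0.2:4420...
nvme discover -t tcp -a 10.0.0.2 -s 4420
# ...then connect to the I/O subsystem itself.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1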
00:33:04.447 [2024-07-22 12:28:12.112289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.447 [2024-07-22 12:28:12.112408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.447 [2024-07-22 12:28:12.112434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.447 [2024-07-22 12:28:12.112454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.447 [2024-07-22 12:28:12.112468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.447 [2024-07-22 12:28:12.112501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.447 qpair failed and we were unable to recover it.
00:33:04.447 [... the same CONNECT-failure block repeats for each subsequent attempt at 12:28:12.122203, .132320, .142261, .152296, .162269, .172274, .182311 and .192337, each ending "qpair failed and we were unable to recover it." ...]
00:33:04.447 [2024-07-22 12:28:12.202409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.202529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.202560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.202576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.202591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.202629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.212428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.212554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.212581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.212596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.212610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.212650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.222454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.222569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.222593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.222608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.222632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.222677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 
00:33:04.447 [2024-07-22 12:28:12.232487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.232604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.232636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.232652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.232666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.232696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.242528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.242700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.242727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.242742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.242755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.242792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.252532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.252653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.252681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.252696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.252710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.252742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 
00:33:04.447 [2024-07-22 12:28:12.262581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.262712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.262740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.262755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.262769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.262800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.272590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.272717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.272743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.272758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.272771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.272801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.282650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.282782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.282808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.282823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.282837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.282867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 
00:33:04.447 [2024-07-22 12:28:12.292745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.292890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.292922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.292938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.292952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.292983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.302695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.302814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.302840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.302855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.302868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.447 [2024-07-22 12:28:12.302899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.447 qpair failed and we were unable to recover it. 00:33:04.447 [2024-07-22 12:28:12.312731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.447 [2024-07-22 12:28:12.312852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.447 [2024-07-22 12:28:12.312878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.447 [2024-07-22 12:28:12.312893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.447 [2024-07-22 12:28:12.312906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.448 [2024-07-22 12:28:12.312936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.448 qpair failed and we were unable to recover it. 
00:33:04.448 [2024-07-22 12:28:12.322743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.448 [2024-07-22 12:28:12.322870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.448 [2024-07-22 12:28:12.322896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.448 [2024-07-22 12:28:12.322911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.448 [2024-07-22 12:28:12.322931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.448 [2024-07-22 12:28:12.322962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.448 qpair failed and we were unable to recover it. 00:33:04.448 [2024-07-22 12:28:12.332772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.448 [2024-07-22 12:28:12.332903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.448 [2024-07-22 12:28:12.332929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.448 [2024-07-22 12:28:12.332954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.448 [2024-07-22 12:28:12.332972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.448 [2024-07-22 12:28:12.333003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.448 qpair failed and we were unable to recover it. 00:33:04.448 [2024-07-22 12:28:12.342802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.448 [2024-07-22 12:28:12.342912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.448 [2024-07-22 12:28:12.342936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.448 [2024-07-22 12:28:12.342950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.448 [2024-07-22 12:28:12.342964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.448 [2024-07-22 12:28:12.342995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.448 qpair failed and we were unable to recover it. 
00:33:04.448 [2024-07-22 12:28:12.352867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.448 [2024-07-22 12:28:12.352996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.448 [2024-07-22 12:28:12.353031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.448 [2024-07-22 12:28:12.353049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.448 [2024-07-22 12:28:12.353062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.448 [2024-07-22 12:28:12.353093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.448 qpair failed and we were unable to recover it. 00:33:04.448 [2024-07-22 12:28:12.362857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.448 [2024-07-22 12:28:12.363040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.448 [2024-07-22 12:28:12.363067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.448 [2024-07-22 12:28:12.363083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.448 [2024-07-22 12:28:12.363096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.448 [2024-07-22 12:28:12.363127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.448 qpair failed and we were unable to recover it. 00:33:04.709 [2024-07-22 12:28:12.372887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.373010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.373037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.373053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.373068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.373098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 
00:33:04.709 [2024-07-22 12:28:12.382995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.383117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.383143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.383173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.383186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.383217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 00:33:04.709 [2024-07-22 12:28:12.393039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.393157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.393182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.393197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.393210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.393242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 00:33:04.709 [2024-07-22 12:28:12.402945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.403066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.403091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.403105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.403119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.403151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 
00:33:04.709 [2024-07-22 12:28:12.412960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.413077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.413103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.413118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.413131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.413162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 00:33:04.709 [2024-07-22 12:28:12.423038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.423153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.423178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.423193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.423215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.423247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 00:33:04.709 [2024-07-22 12:28:12.433023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.433143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.433171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.433186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.433200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.433230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 
00:33:04.709 [2024-07-22 12:28:12.443082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.443202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.443237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.443253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.443266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.443312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 00:33:04.709 [2024-07-22 12:28:12.453089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.709 [2024-07-22 12:28:12.453256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.709 [2024-07-22 12:28:12.453282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.709 [2024-07-22 12:28:12.453298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.709 [2024-07-22 12:28:12.453312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.709 [2024-07-22 12:28:12.453342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.709 qpair failed and we were unable to recover it. 00:33:04.709 [2024-07-22 12:28:12.463138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.463265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.463292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.463312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.463326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.463368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 
00:33:04.710 [2024-07-22 12:28:12.473149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.473264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.473290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.473305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.473319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.473349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.483173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.483306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.483333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.483348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.483362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.483392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.493286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.493400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.493425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.493440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.493454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.493496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 
00:33:04.710 [2024-07-22 12:28:12.503225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.503386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.503413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.503428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.503442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.503472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.513256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.513371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.513397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.513417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.513432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.513463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.523294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.523419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.523444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.523459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.523473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.523504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 
00:33:04.710 [2024-07-22 12:28:12.533371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.533520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.533547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.533563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.533577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.533607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.543343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.543467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.543494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.543514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.543527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.543559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.553387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.553505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.553531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.553546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.553559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.553591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 
00:33:04.710 [2024-07-22 12:28:12.563482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.563625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.563653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.563669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.563682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.563713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.573428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.573549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.573575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.573590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.573603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.573642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.583457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.583585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.583619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.583637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.583651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.583682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 
00:33:04.710 [2024-07-22 12:28:12.593469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.593589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.593623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.593641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.710 [2024-07-22 12:28:12.593663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.710 [2024-07-22 12:28:12.593693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.710 qpair failed and we were unable to recover it. 00:33:04.710 [2024-07-22 12:28:12.603512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.710 [2024-07-22 12:28:12.603660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.710 [2024-07-22 12:28:12.603695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.710 [2024-07-22 12:28:12.603714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.711 [2024-07-22 12:28:12.603728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.711 [2024-07-22 12:28:12.603759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.711 qpair failed and we were unable to recover it. 00:33:04.711 [2024-07-22 12:28:12.613647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.711 [2024-07-22 12:28:12.613770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.711 [2024-07-22 12:28:12.613797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.711 [2024-07-22 12:28:12.613813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.711 [2024-07-22 12:28:12.613827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.711 [2024-07-22 12:28:12.613871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.711 qpair failed and we were unable to recover it. 
00:33:04.711 [2024-07-22 12:28:12.623646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.711 [2024-07-22 12:28:12.623760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.711 [2024-07-22 12:28:12.623785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.711 [2024-07-22 12:28:12.623801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.711 [2024-07-22 12:28:12.623813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.711 [2024-07-22 12:28:12.623856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.711 qpair failed and we were unable to recover it. 00:33:04.711 [2024-07-22 12:28:12.633570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.711 [2024-07-22 12:28:12.633704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.711 [2024-07-22 12:28:12.633732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.711 [2024-07-22 12:28:12.633748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.711 [2024-07-22 12:28:12.633761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.711 [2024-07-22 12:28:12.633792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.711 qpair failed and we were unable to recover it. 00:33:04.973 [2024-07-22 12:28:12.643634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.973 [2024-07-22 12:28:12.643758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.973 [2024-07-22 12:28:12.643785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.973 [2024-07-22 12:28:12.643803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.973 [2024-07-22 12:28:12.643818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.973 [2024-07-22 12:28:12.643854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.973 qpair failed and we were unable to recover it. 
00:33:04.973 [2024-07-22 12:28:12.653640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.973 [2024-07-22 12:28:12.653772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.973 [2024-07-22 12:28:12.653799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.973 [2024-07-22 12:28:12.653814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.973 [2024-07-22 12:28:12.653828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.973 [2024-07-22 12:28:12.653871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.973 qpair failed and we were unable to recover it. 00:33:04.973 [2024-07-22 12:28:12.663700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.973 [2024-07-22 12:28:12.663827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.973 [2024-07-22 12:28:12.663854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.973 [2024-07-22 12:28:12.663870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.973 [2024-07-22 12:28:12.663883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.973 [2024-07-22 12:28:12.663913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.973 qpair failed and we were unable to recover it. 00:33:04.973 [2024-07-22 12:28:12.673706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.973 [2024-07-22 12:28:12.673824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.973 [2024-07-22 12:28:12.673859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.973 [2024-07-22 12:28:12.673874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.973 [2024-07-22 12:28:12.673887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.973 [2024-07-22 12:28:12.673918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.973 qpair failed and we were unable to recover it. 
00:33:04.973 [2024-07-22 12:28:12.683737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.973 [2024-07-22 12:28:12.683877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.973 [2024-07-22 12:28:12.683903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.973 [2024-07-22 12:28:12.683918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.973 [2024-07-22 12:28:12.683932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.973 [2024-07-22 12:28:12.683962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.973 qpair failed and we were unable to recover it. 00:33:04.973 [2024-07-22 12:28:12.693754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.973 [2024-07-22 12:28:12.693883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.973 [2024-07-22 12:28:12.693915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.973 [2024-07-22 12:28:12.693931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.973 [2024-07-22 12:28:12.693945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.973 [2024-07-22 12:28:12.693988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.973 qpair failed and we were unable to recover it. 00:33:04.973 [2024-07-22 12:28:12.703776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.973 [2024-07-22 12:28:12.703922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.973 [2024-07-22 12:28:12.703949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.973 [2024-07-22 12:28:12.703964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.973 [2024-07-22 12:28:12.703977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:04.973 [2024-07-22 12:28:12.704008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.973 qpair failed and we were unable to recover it. 
00:33:04.973 [2024-07-22 12:28:12.713816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.973 [2024-07-22 12:28:12.713943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.973 [2024-07-22 12:28:12.713968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.973 [2024-07-22 12:28:12.713983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.973 [2024-07-22 12:28:12.713996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.973 [2024-07-22 12:28:12.714025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.973 qpair failed and we were unable to recover it.
00:33:04.973 [2024-07-22 12:28:12.723881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.973 [2024-07-22 12:28:12.724014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.973 [2024-07-22 12:28:12.724041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.973 [2024-07-22 12:28:12.724056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.973 [2024-07-22 12:28:12.724070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.973 [2024-07-22 12:28:12.724101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.973 qpair failed and we were unable to recover it.
00:33:04.973 [2024-07-22 12:28:12.733885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.973 [2024-07-22 12:28:12.734005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.973 [2024-07-22 12:28:12.734032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.973 [2024-07-22 12:28:12.734048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.973 [2024-07-22 12:28:12.734065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.973 [2024-07-22 12:28:12.734097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.973 qpair failed and we were unable to recover it.
00:33:04.973 [2024-07-22 12:28:12.744003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.973 [2024-07-22 12:28:12.744117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.973 [2024-07-22 12:28:12.744142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.973 [2024-07-22 12:28:12.744157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.973 [2024-07-22 12:28:12.744170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.973 [2024-07-22 12:28:12.744213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.973 qpair failed and we were unable to recover it.
00:33:04.973 [2024-07-22 12:28:12.753921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.973 [2024-07-22 12:28:12.754038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.973 [2024-07-22 12:28:12.754073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.973 [2024-07-22 12:28:12.754089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.973 [2024-07-22 12:28:12.754102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.973 [2024-07-22 12:28:12.754132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.973 qpair failed and we were unable to recover it.
00:33:04.973 [2024-07-22 12:28:12.763950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.973 [2024-07-22 12:28:12.764070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.973 [2024-07-22 12:28:12.764095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.973 [2024-07-22 12:28:12.764109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.973 [2024-07-22 12:28:12.764122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.973 [2024-07-22 12:28:12.764153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.973 qpair failed and we were unable to recover it.
00:33:04.973 [2024-07-22 12:28:12.773973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.973 [2024-07-22 12:28:12.774096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.973 [2024-07-22 12:28:12.774123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.973 [2024-07-22 12:28:12.774142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.774156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.774187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.784056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.784181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.784208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.784223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.784237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.784268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.794027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.794142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.794167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.794182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.794195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.794226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.804089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.804208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.804233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.804247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.804261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.804291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.814089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.814206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.814232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.814247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.814260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.814290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.824132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.824251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.824276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.824290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.824309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.824341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.834171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.834294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.834331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.834346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.834359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.834389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.844190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.844313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.844338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.844352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.844366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.844396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.854187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.854300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.854326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.854341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.854355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.854386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.864222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.864336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.864361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.864376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.864389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.864420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.874360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.874477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.874502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.874518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.874531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.874561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.884482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.884669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.884697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.884713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.884727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.884757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:04.974 [2024-07-22 12:28:12.894380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:04.974 [2024-07-22 12:28:12.894498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:04.974 [2024-07-22 12:28:12.894523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:04.974 [2024-07-22 12:28:12.894538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:04.974 [2024-07-22 12:28:12.894551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:04.974 [2024-07-22 12:28:12.894582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:04.974 qpair failed and we were unable to recover it.
00:33:05.243 [2024-07-22 12:28:12.904414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.243 [2024-07-22 12:28:12.904589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.243 [2024-07-22 12:28:12.904626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.243 [2024-07-22 12:28:12.904646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.243 [2024-07-22 12:28:12.904660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.243 [2024-07-22 12:28:12.904692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.243 qpair failed and we were unable to recover it.
00:33:05.243 [2024-07-22 12:28:12.914404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.243 [2024-07-22 12:28:12.914563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.243 [2024-07-22 12:28:12.914591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.243 [2024-07-22 12:28:12.914619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.243 [2024-07-22 12:28:12.914636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.243 [2024-07-22 12:28:12.914667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.243 qpair failed and we were unable to recover it.
00:33:05.243 [2024-07-22 12:28:12.924402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.243 [2024-07-22 12:28:12.924531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.243 [2024-07-22 12:28:12.924557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.243 [2024-07-22 12:28:12.924572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.243 [2024-07-22 12:28:12.924585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.243 [2024-07-22 12:28:12.924624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.243 qpair failed and we were unable to recover it.
00:33:05.243 [2024-07-22 12:28:12.934413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.243 [2024-07-22 12:28:12.934533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.243 [2024-07-22 12:28:12.934559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.243 [2024-07-22 12:28:12.934574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.243 [2024-07-22 12:28:12.934587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.243 [2024-07-22 12:28:12.934626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.243 qpair failed and we were unable to recover it.
00:33:05.243 [2024-07-22 12:28:12.944509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.243 [2024-07-22 12:28:12.944689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.243 [2024-07-22 12:28:12.944717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.243 [2024-07-22 12:28:12.944733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.243 [2024-07-22 12:28:12.944748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.243 [2024-07-22 12:28:12.944781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.243 qpair failed and we were unable to recover it.
00:33:05.243 [2024-07-22 12:28:12.954486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.243 [2024-07-22 12:28:12.954607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.243 [2024-07-22 12:28:12.954640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.243 [2024-07-22 12:28:12.954655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.243 [2024-07-22 12:28:12.954668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.243 [2024-07-22 12:28:12.954700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:12.964519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:12.964639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:12.964665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:12.964679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:12.964692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:12.964723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:12.974592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:12.974725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:12.974755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:12.974772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:12.974787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:12.974819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:12.984559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:12.984684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:12.984711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:12.984726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:12.984741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:12.984771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:12.994635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:12.994817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:12.994843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:12.994859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:12.994872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:12.994903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.004643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.004767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.004797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.004813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.004827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.004858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.014688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.014810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.014835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.014850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.014863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.014894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.024698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.024818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.024843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.024858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.024871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.024903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.034732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.034902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.034929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.034944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.034958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.034989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.044749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.044872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.044898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.044913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.044926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.044962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.054785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.054924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.054952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.054968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.054986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.055017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.064855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.064987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.065014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.065029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.065042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.065073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.074858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.074995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.075022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.075037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.075050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.075082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.084902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.085026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.085052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.085067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.085081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.085112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.094890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.095011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.095042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.095058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.095073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.095104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.104925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.105043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.105070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.105089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.105105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.244 [2024-07-22 12:28:13.105137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.244 qpair failed and we were unable to recover it.
00:33:05.244 [2024-07-22 12:28:13.114950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.244 [2024-07-22 12:28:13.115065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.244 [2024-07-22 12:28:13.115091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.244 [2024-07-22 12:28:13.115106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.244 [2024-07-22 12:28:13.115119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.245 [2024-07-22 12:28:13.115151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.245 qpair failed and we were unable to recover it.
00:33:05.245 [2024-07-22 12:28:13.124993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.245 [2024-07-22 12:28:13.125120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.245 [2024-07-22 12:28:13.125145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.245 [2024-07-22 12:28:13.125161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.245 [2024-07-22 12:28:13.125175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.245 [2024-07-22 12:28:13.125205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.245 qpair failed and we were unable to recover it.
00:33:05.245 [2024-07-22 12:28:13.135017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.245 [2024-07-22 12:28:13.135145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.245 [2024-07-22 12:28:13.135171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.245 [2024-07-22 12:28:13.135186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.245 [2024-07-22 12:28:13.135200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.245 [2024-07-22 12:28:13.135252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.245 qpair failed and we were unable to recover it.
00:33:05.245 [2024-07-22 12:28:13.145032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.245 [2024-07-22 12:28:13.145144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.245 [2024-07-22 12:28:13.145171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.245 [2024-07-22 12:28:13.145185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.245 [2024-07-22 12:28:13.145200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.245 [2024-07-22 12:28:13.145231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.245 qpair failed and we were unable to recover it.
00:33:05.245 [2024-07-22 12:28:13.155097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.245 [2024-07-22 12:28:13.155213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.245 [2024-07-22 12:28:13.155239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.245 [2024-07-22 12:28:13.155255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.245 [2024-07-22 12:28:13.155269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.245 [2024-07-22 12:28:13.155299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.245 qpair failed and we were unable to recover it.
00:33:05.245 [2024-07-22 12:28:13.165095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.245 [2024-07-22 12:28:13.165215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.245 [2024-07-22 12:28:13.165241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.245 [2024-07-22 12:28:13.165256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.245 [2024-07-22 12:28:13.165271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.245 [2024-07-22 12:28:13.165302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.245 qpair failed and we were unable to recover it.
00:33:05.507 [2024-07-22 12:28:13.175151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.507 [2024-07-22 12:28:13.175269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.507 [2024-07-22 12:28:13.175295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.507 [2024-07-22 12:28:13.175310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.507 [2024-07-22 12:28:13.175324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.507 [2024-07-22 12:28:13.175354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.507 qpair failed and we were unable to recover it.
00:33:05.507 [2024-07-22 12:28:13.185168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.507 [2024-07-22 12:28:13.185301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.507 [2024-07-22 12:28:13.185327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.507 [2024-07-22 12:28:13.185341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.507 [2024-07-22 12:28:13.185355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.507 [2024-07-22 12:28:13.185386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.507 qpair failed and we were unable to recover it.
00:33:05.507 [2024-07-22 12:28:13.195178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.195293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.195319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.195334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.195349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.195378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.205238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.205358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.205384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.205398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.205411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.205443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.215249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.215405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.215430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.215445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.215458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.215488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.225257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.225373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.225399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.225413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.225432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.225464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.235311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.235433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.235459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.235474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.235488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.235518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.245316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.245433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.245459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.245474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.245487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.245518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.255373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.255491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.255518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.255533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.255546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.255576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.265389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.265505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.265530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.265545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.265558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.265588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.275496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.275621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.275648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.275663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.275676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.275707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.285454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.285580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.285607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.285629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.285643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.285675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.295561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.295704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.295731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.295746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.295759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.295802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.305494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.305609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.305641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.305656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.305669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.305701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.315521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.315641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.315668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.315688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.315703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.315734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.325563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.325690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.325717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.508 [2024-07-22 12:28:13.325732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.508 [2024-07-22 12:28:13.325745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.508 [2024-07-22 12:28:13.325777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.508 qpair failed and we were unable to recover it.
00:33:05.508 [2024-07-22 12:28:13.335671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.508 [2024-07-22 12:28:13.335795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.508 [2024-07-22 12:28:13.335822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.335838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.335852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.335895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.345652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.345772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.345799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.345813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.345827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.345858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.355645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.355775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.355801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.355816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.355830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.355860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.365701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.365828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.365856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.365873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.365889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.365920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.375814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.375945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.375971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.375986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.376001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.376031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.385722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.385841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.385867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.385882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.385897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.385930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.395779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.395966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.395992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.396007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.396021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.396051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.405798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.405952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.405977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.405998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.406013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.406044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.415847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.415974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.416000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.416014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.416028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.416058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.425862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.425983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.426009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.426024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.426038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.426068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.509 [2024-07-22 12:28:13.435885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.509 [2024-07-22 12:28:13.436036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.509 [2024-07-22 12:28:13.436063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.509 [2024-07-22 12:28:13.436078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.509 [2024-07-22 12:28:13.436092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.509 [2024-07-22 12:28:13.436122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.509 qpair failed and we were unable to recover it.
00:33:05.767 [2024-07-22 12:28:13.445909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.767 [2024-07-22 12:28:13.446052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.767 [2024-07-22 12:28:13.446078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.767 [2024-07-22 12:28:13.446093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.767 [2024-07-22 12:28:13.446107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.767 [2024-07-22 12:28:13.446138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.767 qpair failed and we were unable to recover it.
00:33:05.767 [2024-07-22 12:28:13.455963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.456092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.456118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.456132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.456146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.456177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.465953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.466083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.466110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.466125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.466139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.466170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.476000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.476124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.476150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.476164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.476178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.476210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.486115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.486240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.486266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.486281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.486295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.486325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.496043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.496167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.496198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.496213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.496227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.496258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.506096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.506247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.506273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.506287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.506301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.506331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.516162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.516297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.516324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.516338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.516352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.516395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.526184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.526320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.526346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.526360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.526374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.526404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.536214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.536380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.536406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.536421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.536435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.536487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.546201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.546322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.546347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.546362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.546375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.546406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.556215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.556336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.556361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.556376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.556389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.556420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.566304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.566439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.566464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.566479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.566493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.566522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.576287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.576436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.576462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.576478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.576491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.576536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.586355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.586475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.768 [2024-07-22 12:28:13.586506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.768 [2024-07-22 12:28:13.586522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.768 [2024-07-22 12:28:13.586536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.768 [2024-07-22 12:28:13.586567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-22 12:28:13.596335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.768 [2024-07-22 12:28:13.596459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.596487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.596502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.596516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.596548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.606409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.606578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.606605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.606628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.606643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.606674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.616427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.616563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.616588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.616603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.616626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.616659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.626438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.626564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.626590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.626605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.626632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.626666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.636463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.636595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.636631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.636647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.636661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.636692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.646532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.646693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.646719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.646734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.646748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.646777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.656521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.656652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.656677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.656692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.656706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.656737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.666540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.666660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.666686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.666701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.666715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.666746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.676600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.676780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.676806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.676820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.676833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.676863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.686657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.686821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.686847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.686862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.686876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.686907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:05.769 [2024-07-22 12:28:13.696637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:05.769 [2024-07-22 12:28:13.696768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:05.769 [2024-07-22 12:28:13.696794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:05.769 [2024-07-22 12:28:13.696812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:05.769 [2024-07-22 12:28:13.696825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:05.769 [2024-07-22 12:28:13.696855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.769 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.706672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.706789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.028 [2024-07-22 12:28:13.706815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.028 [2024-07-22 12:28:13.706830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.028 [2024-07-22 12:28:13.706844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.028 [2024-07-22 12:28:13.706875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.028 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.716731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.716857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.028 [2024-07-22 12:28:13.716882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.028 [2024-07-22 12:28:13.716901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.028 [2024-07-22 12:28:13.716915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.028 [2024-07-22 12:28:13.716945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.028 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.726736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.726869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.028 [2024-07-22 12:28:13.726895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.028 [2024-07-22 12:28:13.726913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.028 [2024-07-22 12:28:13.726928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.028 [2024-07-22 12:28:13.726959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.028 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.736739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.736856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.028 [2024-07-22 12:28:13.736883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.028 [2024-07-22 12:28:13.736898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.028 [2024-07-22 12:28:13.736912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.028 [2024-07-22 12:28:13.736942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.028 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.746780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.746899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.028 [2024-07-22 12:28:13.746927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.028 [2024-07-22 12:28:13.746946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.028 [2024-07-22 12:28:13.746961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.028 [2024-07-22 12:28:13.746993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.028 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.756793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.756919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.028 [2024-07-22 12:28:13.756946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.028 [2024-07-22 12:28:13.756961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.028 [2024-07-22 12:28:13.756975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.028 [2024-07-22 12:28:13.757005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.028 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.766917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.767043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.028 [2024-07-22 12:28:13.767069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.028 [2024-07-22 12:28:13.767084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.028 [2024-07-22 12:28:13.767098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.028 [2024-07-22 12:28:13.767140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.028 qpair failed and we were unable to recover it.
00:33:06.028 [2024-07-22 12:28:13.776855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.028 [2024-07-22 12:28:13.776998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.777025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.777040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.777053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.777084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.786860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.029 [2024-07-22 12:28:13.786977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.787004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.787019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.787033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.787064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.796894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.029 [2024-07-22 12:28:13.797016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.797042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.797057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.797071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.797113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.807036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.029 [2024-07-22 12:28:13.807191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.807217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.807240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.807255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.807300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.816966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.029 [2024-07-22 12:28:13.817093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.817120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.817134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.817148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.817179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.826984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.029 [2024-07-22 12:28:13.827118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.827143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.827158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.827172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.827202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.837025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.029 [2024-07-22 12:28:13.837148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.837175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.837190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.837203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.837233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.847039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.029 [2024-07-22 12:28:13.847168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.029 [2024-07-22 12:28:13.847194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.029 [2024-07-22 12:28:13.847209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.029 [2024-07-22 12:28:13.847223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.029 [2024-07-22 12:28:13.847260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.029 qpair failed and we were unable to recover it.
00:33:06.029 [2024-07-22 12:28:13.857090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.029 [2024-07-22 12:28:13.857212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.029 [2024-07-22 12:28:13.857238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.029 [2024-07-22 12:28:13.857252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.029 [2024-07-22 12:28:13.857266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.029 [2024-07-22 12:28:13.857296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.029 qpair failed and we were unable to recover it. 00:33:06.029 [2024-07-22 12:28:13.867113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.029 [2024-07-22 12:28:13.867239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.029 [2024-07-22 12:28:13.867264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.029 [2024-07-22 12:28:13.867279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.029 [2024-07-22 12:28:13.867294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.029 [2024-07-22 12:28:13.867323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.029 qpair failed and we were unable to recover it. 00:33:06.029 [2024-07-22 12:28:13.877168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.029 [2024-07-22 12:28:13.877338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.029 [2024-07-22 12:28:13.877364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.029 [2024-07-22 12:28:13.877378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.029 [2024-07-22 12:28:13.877392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.029 [2024-07-22 12:28:13.877422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.029 qpair failed and we were unable to recover it. 
00:33:06.029 [2024-07-22 12:28:13.887186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.029 [2024-07-22 12:28:13.887340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.029 [2024-07-22 12:28:13.887366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.029 [2024-07-22 12:28:13.887381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.029 [2024-07-22 12:28:13.887395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.029 [2024-07-22 12:28:13.887440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.029 qpair failed and we were unable to recover it. 00:33:06.029 [2024-07-22 12:28:13.897178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.029 [2024-07-22 12:28:13.897338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.029 [2024-07-22 12:28:13.897368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.029 [2024-07-22 12:28:13.897384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.029 [2024-07-22 12:28:13.897399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.029 [2024-07-22 12:28:13.897429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.029 qpair failed and we were unable to recover it. 00:33:06.029 [2024-07-22 12:28:13.907292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.029 [2024-07-22 12:28:13.907420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.029 [2024-07-22 12:28:13.907445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.029 [2024-07-22 12:28:13.907460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.029 [2024-07-22 12:28:13.907473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.029 [2024-07-22 12:28:13.907504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.029 qpair failed and we were unable to recover it. 
00:33:06.029 [2024-07-22 12:28:13.917230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.029 [2024-07-22 12:28:13.917350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.029 [2024-07-22 12:28:13.917376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.029 [2024-07-22 12:28:13.917391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.029 [2024-07-22 12:28:13.917405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.029 [2024-07-22 12:28:13.917434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.030 qpair failed and we were unable to recover it. 00:33:06.030 [2024-07-22 12:28:13.927371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.030 [2024-07-22 12:28:13.927548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.030 [2024-07-22 12:28:13.927573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.030 [2024-07-22 12:28:13.927587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.030 [2024-07-22 12:28:13.927602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.030 [2024-07-22 12:28:13.927640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.030 qpair failed and we were unable to recover it. 00:33:06.030 [2024-07-22 12:28:13.937299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.030 [2024-07-22 12:28:13.937469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.030 [2024-07-22 12:28:13.937496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.030 [2024-07-22 12:28:13.937526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.030 [2024-07-22 12:28:13.937540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.030 [2024-07-22 12:28:13.937591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.030 qpair failed and we were unable to recover it. 
00:33:06.030 [2024-07-22 12:28:13.947333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.030 [2024-07-22 12:28:13.947500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.030 [2024-07-22 12:28:13.947526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.030 [2024-07-22 12:28:13.947541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.030 [2024-07-22 12:28:13.947569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.030 [2024-07-22 12:28:13.947599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.030 qpair failed and we were unable to recover it. 00:33:06.030 [2024-07-22 12:28:13.957402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.030 [2024-07-22 12:28:13.957565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.030 [2024-07-22 12:28:13.957591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.030 [2024-07-22 12:28:13.957606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.030 [2024-07-22 12:28:13.957628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.030 [2024-07-22 12:28:13.957660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.030 qpair failed and we were unable to recover it. 00:33:06.288 [2024-07-22 12:28:13.967445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.288 [2024-07-22 12:28:13.967605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.288 [2024-07-22 12:28:13.967638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.288 [2024-07-22 12:28:13.967654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.288 [2024-07-22 12:28:13.967668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.288 [2024-07-22 12:28:13.967711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.288 qpair failed and we were unable to recover it. 
00:33:06.288 [2024-07-22 12:28:13.977401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.288 [2024-07-22 12:28:13.977546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.288 [2024-07-22 12:28:13.977572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.288 [2024-07-22 12:28:13.977587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.288 [2024-07-22 12:28:13.977601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.288 [2024-07-22 12:28:13.977640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.288 qpair failed and we were unable to recover it. 00:33:06.288 [2024-07-22 12:28:13.987440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.288 [2024-07-22 12:28:13.987560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.288 [2024-07-22 12:28:13.987592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.288 [2024-07-22 12:28:13.987608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.288 [2024-07-22 12:28:13.987631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.288 [2024-07-22 12:28:13.987663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.288 qpair failed and we were unable to recover it. 00:33:06.288 [2024-07-22 12:28:13.997458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.288 [2024-07-22 12:28:13.997582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.288 [2024-07-22 12:28:13.997611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.288 [2024-07-22 12:28:13.997639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.288 [2024-07-22 12:28:13.997654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.288 [2024-07-22 12:28:13.997686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.288 qpair failed and we were unable to recover it. 
00:33:06.288 [2024-07-22 12:28:14.007503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.007630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.007656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.007670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.007684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.007714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.017592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.017726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.017752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.017767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.017781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.017823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.027651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.027771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.027797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.027812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.027830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.027875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 
00:33:06.289 [2024-07-22 12:28:14.037576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.037753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.037781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.037797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.037811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.037844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.047701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.047830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.047857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.047872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.047885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.047921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.057650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.057768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.057794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.057809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.057822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.057852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 
00:33:06.289 [2024-07-22 12:28:14.067683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.067815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.067841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.067857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.067870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.067901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.077827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.077947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.077971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.077986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.077999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.078030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.087763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.087907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.087933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.087948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.087962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.088007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 
00:33:06.289 [2024-07-22 12:28:14.097764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.097883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.097919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.097934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.097948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.097978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.107833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.107959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.107985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.108001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.108014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.108044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.117801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.117919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.117943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.117958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.117976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.118008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 
00:33:06.289 [2024-07-22 12:28:14.127938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.128062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.128092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.128107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.128121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.128171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.137870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.137999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.138026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.289 [2024-07-22 12:28:14.138041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.289 [2024-07-22 12:28:14.138055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.289 [2024-07-22 12:28:14.138085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.289 qpair failed and we were unable to recover it. 00:33:06.289 [2024-07-22 12:28:14.147932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.289 [2024-07-22 12:28:14.148063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.289 [2024-07-22 12:28:14.148089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.290 [2024-07-22 12:28:14.148104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.290 [2024-07-22 12:28:14.148117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.290 [2024-07-22 12:28:14.148147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.290 qpair failed and we were unable to recover it. 
00:33:06.290 [2024-07-22 12:28:14.157955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.290 [2024-07-22 12:28:14.158090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.290 [2024-07-22 12:28:14.158116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.290 [2024-07-22 12:28:14.158131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.290 [2024-07-22 12:28:14.158144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.290 [2024-07-22 12:28:14.158174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.290 qpair failed and we were unable to recover it. 00:33:06.290 [2024-07-22 12:28:14.167992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.290 [2024-07-22 12:28:14.168148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.290 [2024-07-22 12:28:14.168174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.290 [2024-07-22 12:28:14.168189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.290 [2024-07-22 12:28:14.168202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.290 [2024-07-22 12:28:14.168233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.290 qpair failed and we were unable to recover it. 00:33:06.290 [2024-07-22 12:28:14.178071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.290 [2024-07-22 12:28:14.178189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.290 [2024-07-22 12:28:14.178214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.290 [2024-07-22 12:28:14.178234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.290 [2024-07-22 12:28:14.178256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.290 [2024-07-22 12:28:14.178291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.290 qpair failed and we were unable to recover it. 
00:33:06.290 [2024-07-22 12:28:14.188009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.290 [2024-07-22 12:28:14.188125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.290 [2024-07-22 12:28:14.188150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.290 [2024-07-22 12:28:14.188166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.290 [2024-07-22 12:28:14.188179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.290 [2024-07-22 12:28:14.188211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.290 qpair failed and we were unable to recover it. 00:33:06.290 [2024-07-22 12:28:14.198076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.290 [2024-07-22 12:28:14.198196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.290 [2024-07-22 12:28:14.198221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.290 [2024-07-22 12:28:14.198235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.290 [2024-07-22 12:28:14.198249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.290 [2024-07-22 12:28:14.198280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.290 qpair failed and we were unable to recover it. 00:33:06.290 [2024-07-22 12:28:14.208165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.290 [2024-07-22 12:28:14.208317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.290 [2024-07-22 12:28:14.208344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.290 [2024-07-22 12:28:14.208364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.290 [2024-07-22 12:28:14.208393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.290 [2024-07-22 12:28:14.208425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.290 qpair failed and we were unable to recover it. 
00:33:06.290 [2024-07-22 12:28:14.218093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.548 [2024-07-22 12:28:14.218214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.548 [2024-07-22 12:28:14.218242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.548 [2024-07-22 12:28:14.218261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.548 [2024-07-22 12:28:14.218276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.548 [2024-07-22 12:28:14.218308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.548 qpair failed and we were unable to recover it. 00:33:06.548 [2024-07-22 12:28:14.228153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.548 [2024-07-22 12:28:14.228270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.548 [2024-07-22 12:28:14.228300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.548 [2024-07-22 12:28:14.228328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.548 [2024-07-22 12:28:14.228345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.548 [2024-07-22 12:28:14.228378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.548 qpair failed and we were unable to recover it. 00:33:06.548 [2024-07-22 12:28:14.238148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.548 [2024-07-22 12:28:14.238269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.548 [2024-07-22 12:28:14.238296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.548 [2024-07-22 12:28:14.238311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.548 [2024-07-22 12:28:14.238324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.548 [2024-07-22 12:28:14.238355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.548 qpair failed and we were unable to recover it. 
00:33:06.548 [2024-07-22 12:28:14.248221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.548 [2024-07-22 12:28:14.248344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.548 [2024-07-22 12:28:14.248370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.548 [2024-07-22 12:28:14.248385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.548 [2024-07-22 12:28:14.248398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.548 [2024-07-22 12:28:14.248428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.548 qpair failed and we were unable to recover it. 00:33:06.548 [2024-07-22 12:28:14.258296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.548 [2024-07-22 12:28:14.258412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.548 [2024-07-22 12:28:14.258439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.548 [2024-07-22 12:28:14.258454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.548 [2024-07-22 12:28:14.258466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.548 [2024-07-22 12:28:14.258508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.548 qpair failed and we were unable to recover it. 00:33:06.548 [2024-07-22 12:28:14.268271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.548 [2024-07-22 12:28:14.268392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.548 [2024-07-22 12:28:14.268419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.268434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.268447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.268478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 
00:33:06.549 [2024-07-22 12:28:14.278278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.278414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.278441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.278456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.278470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.278501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.288331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.288449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.288475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.288490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.288503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.288533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.298354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.298475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.298515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.298532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.298545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.298577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 
00:33:06.549 [2024-07-22 12:28:14.308355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.308493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.308520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.308535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.308553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.308600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.318390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.318510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.318536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.318551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.318564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.318595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.328427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.328558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.328586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.328604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.328626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.328660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 
00:33:06.549 [2024-07-22 12:28:14.338452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.338573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.338600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.338625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.338641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.338678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.348467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.348584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.348610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.348633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.348646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.348677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.358527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.358670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.358700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.358715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.358728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.358758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 
00:33:06.549 [2024-07-22 12:28:14.368648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.368831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.368858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.368873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.368885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.368927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.378561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.378752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.378779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.378794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.378808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.378839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.388583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.388713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.388755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.388772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.388785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.388816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 
00:33:06.549 [2024-07-22 12:28:14.398711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.398829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.398856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.398878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.398892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.549 [2024-07-22 12:28:14.398937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.549 qpair failed and we were unable to recover it. 00:33:06.549 [2024-07-22 12:28:14.408643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.549 [2024-07-22 12:28:14.408772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.549 [2024-07-22 12:28:14.408798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.549 [2024-07-22 12:28:14.408813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.549 [2024-07-22 12:28:14.408827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.550 [2024-07-22 12:28:14.408857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.550 qpair failed and we were unable to recover it. 00:33:06.550 [2024-07-22 12:28:14.418672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.550 [2024-07-22 12:28:14.418793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.550 [2024-07-22 12:28:14.418818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.550 [2024-07-22 12:28:14.418832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.550 [2024-07-22 12:28:14.418845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:06.550 [2024-07-22 12:28:14.418876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:06.550 qpair failed and we were unable to recover it. 
00:33:06.550 [2024-07-22 12:28:14.428695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.550 [2024-07-22 12:28:14.428852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.550 [2024-07-22 12:28:14.428880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.550 [2024-07-22 12:28:14.428896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.550 [2024-07-22 12:28:14.428914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.550 [2024-07-22 12:28:14.428945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.550 qpair failed and we were unable to recover it.
00:33:06.550 [2024-07-22 12:28:14.438711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.550 [2024-07-22 12:28:14.438844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.550 [2024-07-22 12:28:14.438870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.550 [2024-07-22 12:28:14.438885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.550 [2024-07-22 12:28:14.438898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.550 [2024-07-22 12:28:14.438928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.550 qpair failed and we were unable to recover it.
00:33:06.550 [2024-07-22 12:28:14.448749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.550 [2024-07-22 12:28:14.448869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.550 [2024-07-22 12:28:14.448894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.550 [2024-07-22 12:28:14.448909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.550 [2024-07-22 12:28:14.448922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.550 [2024-07-22 12:28:14.448953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.550 qpair failed and we were unable to recover it.
00:33:06.550 [2024-07-22 12:28:14.458781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.550 [2024-07-22 12:28:14.458926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.550 [2024-07-22 12:28:14.458953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.550 [2024-07-22 12:28:14.458968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.550 [2024-07-22 12:28:14.458981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.550 [2024-07-22 12:28:14.459026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.550 qpair failed and we were unable to recover it.
00:33:06.550 [2024-07-22 12:28:14.468816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.550 [2024-07-22 12:28:14.468932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.550 [2024-07-22 12:28:14.468958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.550 [2024-07-22 12:28:14.468973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.550 [2024-07-22 12:28:14.468988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.550 [2024-07-22 12:28:14.469019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.550 qpair failed and we were unable to recover it.
00:33:06.810 [2024-07-22 12:28:14.478828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.810 [2024-07-22 12:28:14.478946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.810 [2024-07-22 12:28:14.478973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.810 [2024-07-22 12:28:14.478988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.810 [2024-07-22 12:28:14.479002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.810 [2024-07-22 12:28:14.479033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.810 qpair failed and we were unable to recover it.
00:33:06.810 [2024-07-22 12:28:14.488896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.810 [2024-07-22 12:28:14.489018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.810 [2024-07-22 12:28:14.489043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.810 [2024-07-22 12:28:14.489058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.810 [2024-07-22 12:28:14.489071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.810 [2024-07-22 12:28:14.489103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.810 qpair failed and we were unable to recover it.
00:33:06.810 [2024-07-22 12:28:14.498919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.810 [2024-07-22 12:28:14.499040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.810 [2024-07-22 12:28:14.499075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.810 [2024-07-22 12:28:14.499091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.810 [2024-07-22 12:28:14.499104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.810 [2024-07-22 12:28:14.499134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.810 qpair failed and we were unable to recover it.
00:33:06.810 [2024-07-22 12:28:14.508958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.810 [2024-07-22 12:28:14.509076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.810 [2024-07-22 12:28:14.509100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.810 [2024-07-22 12:28:14.509114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.810 [2024-07-22 12:28:14.509127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.810 [2024-07-22 12:28:14.509158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.810 qpair failed and we were unable to recover it.
00:33:06.810 [2024-07-22 12:28:14.518985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.810 [2024-07-22 12:28:14.519102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.810 [2024-07-22 12:28:14.519126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.810 [2024-07-22 12:28:14.519141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.810 [2024-07-22 12:28:14.519160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.810 [2024-07-22 12:28:14.519192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.810 qpair failed and we were unable to recover it.
00:33:06.810 [2024-07-22 12:28:14.529031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.810 [2024-07-22 12:28:14.529189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.810 [2024-07-22 12:28:14.529217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.810 [2024-07-22 12:28:14.529233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.529246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.529292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.539068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.539188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.539215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.539231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.539244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.539276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.549057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.549179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.549205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.549219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.549232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.549263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.559083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.559213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.559240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.559255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.559270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.559300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.569115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.569265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.569290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.569305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.569318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.569349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.579172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.579301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.579328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.579342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.579356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.579387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.589191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.589326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.589352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.589367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.589381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.589411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.599232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.599343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.599369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.599384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.599397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.599428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.609251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.609371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.609395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.609415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.609429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.609459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.619296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.619463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.619489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.619505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.619533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.619563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.629289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.629417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.629444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.629460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.629473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.629504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.639347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.639478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.639505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.639520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.639533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.639563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.649343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.649490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.649516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.649532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.649546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.649575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.659391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.811 [2024-07-22 12:28:14.659510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.811 [2024-07-22 12:28:14.659535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.811 [2024-07-22 12:28:14.659549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.811 [2024-07-22 12:28:14.659563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.811 [2024-07-22 12:28:14.659594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.811 qpair failed and we were unable to recover it.
00:33:06.811 [2024-07-22 12:28:14.669424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.812 [2024-07-22 12:28:14.669556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.812 [2024-07-22 12:28:14.669583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.812 [2024-07-22 12:28:14.669597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.812 [2024-07-22 12:28:14.669611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.812 [2024-07-22 12:28:14.669650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.812 qpair failed and we were unable to recover it.
00:33:06.812 [2024-07-22 12:28:14.679413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.812 [2024-07-22 12:28:14.679575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.812 [2024-07-22 12:28:14.679603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.812 [2024-07-22 12:28:14.679626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.812 [2024-07-22 12:28:14.679642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.812 [2024-07-22 12:28:14.679672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.812 qpair failed and we were unable to recover it.
00:33:06.812 [2024-07-22 12:28:14.689562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.812 [2024-07-22 12:28:14.689699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.812 [2024-07-22 12:28:14.689726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.812 [2024-07-22 12:28:14.689742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.812 [2024-07-22 12:28:14.689756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.812 [2024-07-22 12:28:14.689799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.812 qpair failed and we were unable to recover it.
00:33:06.812 [2024-07-22 12:28:14.699482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.812 [2024-07-22 12:28:14.699607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.812 [2024-07-22 12:28:14.699650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.812 [2024-07-22 12:28:14.699668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.812 [2024-07-22 12:28:14.699682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.812 [2024-07-22 12:28:14.699713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.812 qpair failed and we were unable to recover it.
00:33:06.812 [2024-07-22 12:28:14.709527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.812 [2024-07-22 12:28:14.709657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.812 [2024-07-22 12:28:14.709682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.812 [2024-07-22 12:28:14.709698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.812 [2024-07-22 12:28:14.709711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.812 [2024-07-22 12:28:14.709742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.812 qpair failed and we were unable to recover it.
00:33:06.812 [2024-07-22 12:28:14.719560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.812 [2024-07-22 12:28:14.719690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.812 [2024-07-22 12:28:14.719715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.812 [2024-07-22 12:28:14.719730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.812 [2024-07-22 12:28:14.719742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.812 [2024-07-22 12:28:14.719772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.812 qpair failed and we were unable to recover it.
00:33:06.812 [2024-07-22 12:28:14.729639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:06.812 [2024-07-22 12:28:14.729766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:06.812 [2024-07-22 12:28:14.729793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:06.812 [2024-07-22 12:28:14.729810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:06.812 [2024-07-22 12:28:14.729823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:06.812 [2024-07-22 12:28:14.729853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:06.812 qpair failed and we were unable to recover it.
00:33:06.812 [2024-07-22 12:28:14.739651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.073 [2024-07-22 12:28:14.739771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.073 [2024-07-22 12:28:14.739800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.073 [2024-07-22 12:28:14.739817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.073 [2024-07-22 12:28:14.739832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.073 [2024-07-22 12:28:14.739870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.073 qpair failed and we were unable to recover it.
00:33:07.073 [2024-07-22 12:28:14.749659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.073 [2024-07-22 12:28:14.749797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.073 [2024-07-22 12:28:14.749827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.073 [2024-07-22 12:28:14.749845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.073 [2024-07-22 12:28:14.749858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.073 [2024-07-22 12:28:14.749890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.073 qpair failed and we were unable to recover it.
00:33:07.073 [2024-07-22 12:28:14.759681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.073 [2024-07-22 12:28:14.759802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.073 [2024-07-22 12:28:14.759829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.073 [2024-07-22 12:28:14.759845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.073 [2024-07-22 12:28:14.759859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.073 [2024-07-22 12:28:14.759891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.073 qpair failed and we were unable to recover it.
00:33:07.073 [2024-07-22 12:28:14.769858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.073 [2024-07-22 12:28:14.770022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.073 [2024-07-22 12:28:14.770048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.073 [2024-07-22 12:28:14.770063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.073 [2024-07-22 12:28:14.770076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.073 [2024-07-22 12:28:14.770132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.073 qpair failed and we were unable to recover it.
00:33:07.073 [2024-07-22 12:28:14.779734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.073 [2024-07-22 12:28:14.779854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.073 [2024-07-22 12:28:14.779880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.073 [2024-07-22 12:28:14.779895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.073 [2024-07-22 12:28:14.779908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.073 [2024-07-22 12:28:14.779939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.073 qpair failed and we were unable to recover it.
00:33:07.073 [2024-07-22 12:28:14.789752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.073 [2024-07-22 12:28:14.789906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.073 [2024-07-22 12:28:14.789937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.073 [2024-07-22 12:28:14.789953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.073 [2024-07-22 12:28:14.789966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.073 [2024-07-22 12:28:14.789998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.073 qpair failed and we were unable to recover it.
00:33:07.073 [2024-07-22 12:28:14.799900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.073 [2024-07-22 12:28:14.800035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.073 [2024-07-22 12:28:14.800062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.073 [2024-07-22 12:28:14.800077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.800091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.800136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.809849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.809989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.810016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.810031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.810044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.810089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.819853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.819999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.820025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.820040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.820053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.820097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.829925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.830044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.830071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.830085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.830098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.830135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.839938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.840111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.840138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.840154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.840167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.840197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.849919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.850044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.850079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.850094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.850107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.850137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.859973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.860092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.860117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.860132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.860147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.860177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.869971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.870118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.870145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.870161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.870174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.870207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.880009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.880170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.880198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.880213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.880227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.880257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.890084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.890215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.890242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.890257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.890271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.890302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.900064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.900219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.900246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.900261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.900274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.900304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.910092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.910222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.910248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.910263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.910277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.910306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.920157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.920281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.920309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.920327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.920347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.920380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.930190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.930315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.930342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.930357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.930370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.930416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.074 [2024-07-22 12:28:14.940300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.074 [2024-07-22 12:28:14.940434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.074 [2024-07-22 12:28:14.940462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.074 [2024-07-22 12:28:14.940477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.074 [2024-07-22 12:28:14.940491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.074 [2024-07-22 12:28:14.940534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.074 qpair failed and we were unable to recover it.
00:33:07.075 [2024-07-22 12:28:14.950241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.075 [2024-07-22 12:28:14.950393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.075 [2024-07-22 12:28:14.950420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.075 [2024-07-22 12:28:14.950436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.075 [2024-07-22 12:28:14.950449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.075 [2024-07-22 12:28:14.950480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.075 qpair failed and we were unable to recover it.
00:33:07.075 [2024-07-22 12:28:14.960249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.075 [2024-07-22 12:28:14.960374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.075 [2024-07-22 12:28:14.960401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.075 [2024-07-22 12:28:14.960416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.075 [2024-07-22 12:28:14.960429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.075 [2024-07-22 12:28:14.960459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.075 qpair failed and we were unable to recover it.
00:33:07.075 [2024-07-22 12:28:14.970319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.075 [2024-07-22 12:28:14.970443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.075 [2024-07-22 12:28:14.970469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.075 [2024-07-22 12:28:14.970484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.075 [2024-07-22 12:28:14.970497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.075 [2024-07-22 12:28:14.970528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.075 qpair failed and we were unable to recover it.
00:33:07.075 [2024-07-22 12:28:14.980324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.075 [2024-07-22 12:28:14.980456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.075 [2024-07-22 12:28:14.980484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.075 [2024-07-22 12:28:14.980499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.075 [2024-07-22 12:28:14.980513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.075 [2024-07-22 12:28:14.980543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.075 qpair failed and we were unable to recover it.
00:33:07.075 [2024-07-22 12:28:14.990305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.075 [2024-07-22 12:28:14.990422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.075 [2024-07-22 12:28:14.990449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.075 [2024-07-22 12:28:14.990464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.075 [2024-07-22 12:28:14.990477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.075 [2024-07-22 12:28:14.990507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.075 qpair failed and we were unable to recover it.
00:33:07.075 [2024-07-22 12:28:15.000361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.075 [2024-07-22 12:28:15.000480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.075 [2024-07-22 12:28:15.000505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.075 [2024-07-22 12:28:15.000519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.075 [2024-07-22 12:28:15.000533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.075 [2024-07-22 12:28:15.000563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.075 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.010390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.010509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.010534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.010554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.010567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.010598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.020417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.020558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.020586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.020602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.020623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.020668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.030443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.030565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.030591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.030606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.030626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.030659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.040474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.040603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.040637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.040653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.040668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.040698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.050529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.050658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.050684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.050700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.050713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.050744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.060519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.060655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.060681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.060696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.060709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.060740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.070538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.070660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.070686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.070700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.070713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.070744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.080578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.080711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.080738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.080753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.080765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.080797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.336 qpair failed and we were unable to recover it.
00:33:07.336 [2024-07-22 12:28:15.090645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.336 [2024-07-22 12:28:15.090794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.336 [2024-07-22 12:28:15.090820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.336 [2024-07-22 12:28:15.090835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.336 [2024-07-22 12:28:15.090848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.336 [2024-07-22 12:28:15.090892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.100701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.100823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.100849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.100871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.100886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.100917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.110680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.110808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.110834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.110849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.110863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.110905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.120707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.120834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.120860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.120874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.120886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.120917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.130764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.130893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.130920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.130935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.130948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.130980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.140784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.140905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.140933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.140948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.140961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.140993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.150799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.150917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.150943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.150958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.150973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.151003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.160808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.160920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.160946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.160961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.160974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.161005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.170845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.170965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.170990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.171005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.171018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.171048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.180883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.181010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.181037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.181051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.181064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.181095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.337 [2024-07-22 12:28:15.190894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.337 [2024-07-22 12:28:15.191016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.337 [2024-07-22 12:28:15.191049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.337 [2024-07-22 12:28:15.191065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.337 [2024-07-22 12:28:15.191080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.337 [2024-07-22 12:28:15.191114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.337 qpair failed and we were unable to recover it.
00:33:07.338 [2024-07-22 12:28:15.200942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.338 [2024-07-22 12:28:15.201063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.338 [2024-07-22 12:28:15.201089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.338 [2024-07-22 12:28:15.201104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.338 [2024-07-22 12:28:15.201117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.338 [2024-07-22 12:28:15.201149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.338 qpair failed and we were unable to recover it.
00:33:07.338 [2024-07-22 12:28:15.210949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.338 [2024-07-22 12:28:15.211067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.338 [2024-07-22 12:28:15.211093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.338 [2024-07-22 12:28:15.211108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.338 [2024-07-22 12:28:15.211121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.338 [2024-07-22 12:28:15.211152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.338 qpair failed and we were unable to recover it.
00:33:07.338 [2024-07-22 12:28:15.221058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.338 [2024-07-22 12:28:15.221175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.338 [2024-07-22 12:28:15.221201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.338 [2024-07-22 12:28:15.221216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.338 [2024-07-22 12:28:15.221229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.338 [2024-07-22 12:28:15.221273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.338 qpair failed and we were unable to recover it.
00:33:07.338 [2024-07-22 12:28:15.230995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.338 [2024-07-22 12:28:15.231110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.338 [2024-07-22 12:28:15.231137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.338 [2024-07-22 12:28:15.231152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.338 [2024-07-22 12:28:15.231164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.338 [2024-07-22 12:28:15.231204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.338 qpair failed and we were unable to recover it.
00:33:07.338 [2024-07-22 12:28:15.241007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.338 [2024-07-22 12:28:15.241172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.338 [2024-07-22 12:28:15.241199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.338 [2024-07-22 12:28:15.241214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.338 [2024-07-22 12:28:15.241228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.338 [2024-07-22 12:28:15.241259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.338 qpair failed and we were unable to recover it.
00:33:07.338 [2024-07-22 12:28:15.251056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.338 [2024-07-22 12:28:15.251184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.338 [2024-07-22 12:28:15.251209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.338 [2024-07-22 12:28:15.251225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.338 [2024-07-22 12:28:15.251237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.338 [2024-07-22 12:28:15.251268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.338 qpair failed and we were unable to recover it.
00:33:07.338 [2024-07-22 12:28:15.261065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.338 [2024-07-22 12:28:15.261183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.338 [2024-07-22 12:28:15.261209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.338 [2024-07-22 12:28:15.261224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.338 [2024-07-22 12:28:15.261237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.338 [2024-07-22 12:28:15.261268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.338 qpair failed and we were unable to recover it.
00:33:07.598 [2024-07-22 12:28:15.271120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.598 [2024-07-22 12:28:15.271255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.598 [2024-07-22 12:28:15.271282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.598 [2024-07-22 12:28:15.271297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.598 [2024-07-22 12:28:15.271314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.598 [2024-07-22 12:28:15.271345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.598 qpair failed and we were unable to recover it.
00:33:07.598 [2024-07-22 12:28:15.281161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.598 [2024-07-22 12:28:15.281284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.598 [2024-07-22 12:28:15.281315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.598 [2024-07-22 12:28:15.281331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.598 [2024-07-22 12:28:15.281344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.598 [2024-07-22 12:28:15.281376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.598 qpair failed and we were unable to recover it.
00:33:07.598 [2024-07-22 12:28:15.291177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.598 [2024-07-22 12:28:15.291301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.598 [2024-07-22 12:28:15.291328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.598 [2024-07-22 12:28:15.291342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.598 [2024-07-22 12:28:15.291358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.598 [2024-07-22 12:28:15.291388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.598 qpair failed and we were unable to recover it.
00:33:07.598 [2024-07-22 12:28:15.301269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.598 [2024-07-22 12:28:15.301388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.598 [2024-07-22 12:28:15.301414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.598 [2024-07-22 12:28:15.301430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.598 [2024-07-22 12:28:15.301447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.598 [2024-07-22 12:28:15.301491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.598 qpair failed and we were unable to recover it.
00:33:07.598 [2024-07-22 12:28:15.311232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.598 [2024-07-22 12:28:15.311392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.598 [2024-07-22 12:28:15.311419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.598 [2024-07-22 12:28:15.311434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.598 [2024-07-22 12:28:15.311463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.598 [2024-07-22 12:28:15.311494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.598 qpair failed and we were unable to recover it.
00:33:07.598 [2024-07-22 12:28:15.321335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.321451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.321477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.321492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.321512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.321555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.331300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.331465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.331492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.331522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.331535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.331593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.341326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.341460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.341487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.341502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.341516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.341562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.351337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.351456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.351482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.351496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.351510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.351541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.361382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.361508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.361534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.361548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.361561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.361593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.371407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.371542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.371568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.371583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.371598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.371653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.381415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.381542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.381568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.381584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.381598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.381638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.391444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.391560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.391587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.391601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.391623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.391657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.401472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.401593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.401631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.401651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.401665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.401697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.411522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.411662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.411689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.411712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.411727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.411759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.421542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.421678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.421704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.421719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.421732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.421764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.431561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.431733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.431761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.431776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.431789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.431821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.441582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.441721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.441747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.441761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.441774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.441806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.451610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.451740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.451766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.599 [2024-07-22 12:28:15.451781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.599 [2024-07-22 12:28:15.451795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.599 [2024-07-22 12:28:15.451826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.599 qpair failed and we were unable to recover it.
00:33:07.599 [2024-07-22 12:28:15.461620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.599 [2024-07-22 12:28:15.461740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.599 [2024-07-22 12:28:15.461766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.600 [2024-07-22 12:28:15.461781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.600 [2024-07-22 12:28:15.461794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.600 [2024-07-22 12:28:15.461825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.600 qpair failed and we were unable to recover it.
00:33:07.600 [2024-07-22 12:28:15.471676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.600 [2024-07-22 12:28:15.471790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.600 [2024-07-22 12:28:15.471816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.600 [2024-07-22 12:28:15.471831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.600 [2024-07-22 12:28:15.471845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.600 [2024-07-22 12:28:15.471876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.600 qpair failed and we were unable to recover it.
00:33:07.600 [2024-07-22 12:28:15.481694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.600 [2024-07-22 12:28:15.481818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.600 [2024-07-22 12:28:15.481844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.600 [2024-07-22 12:28:15.481859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.600 [2024-07-22 12:28:15.481874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.600 [2024-07-22 12:28:15.481904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.600 qpair failed and we were unable to recover it.
00:33:07.600 [2024-07-22 12:28:15.491728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.600 [2024-07-22 12:28:15.491850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.600 [2024-07-22 12:28:15.491876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.600 [2024-07-22 12:28:15.491891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.600 [2024-07-22 12:28:15.491906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.600 [2024-07-22 12:28:15.491936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.600 qpair failed and we were unable to recover it.
00:33:07.600 [2024-07-22 12:28:15.501770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.600 [2024-07-22 12:28:15.501893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.600 [2024-07-22 12:28:15.501919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.600 [2024-07-22 12:28:15.501939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.600 [2024-07-22 12:28:15.501953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.600 [2024-07-22 12:28:15.501984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.600 qpair failed and we were unable to recover it.
00:33:07.600 [2024-07-22 12:28:15.511802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.600 [2024-07-22 12:28:15.511930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.600 [2024-07-22 12:28:15.511956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.600 [2024-07-22 12:28:15.511970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.600 [2024-07-22 12:28:15.511983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.600 [2024-07-22 12:28:15.512014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.600 qpair failed and we were unable to recover it.
00:33:07.600 [2024-07-22 12:28:15.521788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.600 [2024-07-22 12:28:15.521900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.600 [2024-07-22 12:28:15.521925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.600 [2024-07-22 12:28:15.521940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.600 [2024-07-22 12:28:15.521953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.600 [2024-07-22 12:28:15.521984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.600 qpair failed and we were unable to recover it.
00:33:07.860 [2024-07-22 12:28:15.531829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.860 [2024-07-22 12:28:15.531953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.860 [2024-07-22 12:28:15.531980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.860 [2024-07-22 12:28:15.531995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.860 [2024-07-22 12:28:15.532009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.860 [2024-07-22 12:28:15.532040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.860 qpair failed and we were unable to recover it.
00:33:07.860 [2024-07-22 12:28:15.541864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.860 [2024-07-22 12:28:15.542007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.860 [2024-07-22 12:28:15.542034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.860 [2024-07-22 12:28:15.542049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.860 [2024-07-22 12:28:15.542062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.860 [2024-07-22 12:28:15.542108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.860 qpair failed and we were unable to recover it.
00:33:07.860 [2024-07-22 12:28:15.551898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.860 [2024-07-22 12:28:15.552017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.860 [2024-07-22 12:28:15.552044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.860 [2024-07-22 12:28:15.552059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.860 [2024-07-22 12:28:15.552076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.860 [2024-07-22 12:28:15.552106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.860 qpair failed and we were unable to recover it.
00:33:07.860 [2024-07-22 12:28:15.561919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:07.860 [2024-07-22 12:28:15.562038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:07.860 [2024-07-22 12:28:15.562065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:07.860 [2024-07-22 12:28:15.562080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:07.860 [2024-07-22 12:28:15.562093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:07.860 [2024-07-22 12:28:15.562123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:07.860 qpair failed and we were unable to recover it.
00:33:07.860 [2024-07-22 12:28:15.571973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.572105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.572131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.860 [2024-07-22 12:28:15.572145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.860 [2024-07-22 12:28:15.572158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.860 [2024-07-22 12:28:15.572215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.860 qpair failed and we were unable to recover it. 00:33:07.860 [2024-07-22 12:28:15.581979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.582096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.582123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.860 [2024-07-22 12:28:15.582138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.860 [2024-07-22 12:28:15.582152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.860 [2024-07-22 12:28:15.582195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.860 qpair failed and we were unable to recover it. 00:33:07.860 [2024-07-22 12:28:15.591989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.592104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.592135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.860 [2024-07-22 12:28:15.592150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.860 [2024-07-22 12:28:15.592165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.860 [2024-07-22 12:28:15.592196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.860 qpair failed and we were unable to recover it. 
00:33:07.860 [2024-07-22 12:28:15.602021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.602139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.602165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.860 [2024-07-22 12:28:15.602179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.860 [2024-07-22 12:28:15.602191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.860 [2024-07-22 12:28:15.602222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.860 qpair failed and we were unable to recover it. 00:33:07.860 [2024-07-22 12:28:15.612101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.612225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.612251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.860 [2024-07-22 12:28:15.612266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.860 [2024-07-22 12:28:15.612279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.860 [2024-07-22 12:28:15.612326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.860 qpair failed and we were unable to recover it. 00:33:07.860 [2024-07-22 12:28:15.622129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.622257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.622283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.860 [2024-07-22 12:28:15.622298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.860 [2024-07-22 12:28:15.622311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.860 [2024-07-22 12:28:15.622369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.860 qpair failed and we were unable to recover it. 
00:33:07.860 [2024-07-22 12:28:15.632130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.632253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.632280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.860 [2024-07-22 12:28:15.632295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.860 [2024-07-22 12:28:15.632309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.860 [2024-07-22 12:28:15.632346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.860 qpair failed and we were unable to recover it. 00:33:07.860 [2024-07-22 12:28:15.642135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.860 [2024-07-22 12:28:15.642256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.860 [2024-07-22 12:28:15.642282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.642297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.642311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.642340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.652194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.652320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.652345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.652360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.652374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.652417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 
00:33:07.861 [2024-07-22 12:28:15.662208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.662328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.662354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.662369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.662382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.662414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.672268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.672399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.672425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.672440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.672454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.672485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.682280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.682404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.682436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.682455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.682469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.682499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 
00:33:07.861 [2024-07-22 12:28:15.692318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.692446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.692473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.692492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.692506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.692563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.702338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.702511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.702539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.702571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.702587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.702654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.712370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.712489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.712516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.712530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.712544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.712587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 
00:33:07.861 [2024-07-22 12:28:15.722408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.722547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.722573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.722591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.722610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.722655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.732444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.732574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.732607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.732631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.732645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.732676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.742439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.742570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.742596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.742611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.742633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.742664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 
00:33:07.861 [2024-07-22 12:28:15.752502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.752630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.752657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.752672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.752685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.752716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.762514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.762651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.762678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.762693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.762710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.762742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 00:33:07.861 [2024-07-22 12:28:15.772540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.772679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.772705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.772720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.772734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.861 [2024-07-22 12:28:15.772764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.861 qpair failed and we were unable to recover it. 
00:33:07.861 [2024-07-22 12:28:15.782542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.861 [2024-07-22 12:28:15.782679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.861 [2024-07-22 12:28:15.782706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.861 [2024-07-22 12:28:15.782721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.861 [2024-07-22 12:28:15.782734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:07.862 [2024-07-22 12:28:15.782765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:07.862 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-22 12:28:15.792580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.121 [2024-07-22 12:28:15.792720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.121 [2024-07-22 12:28:15.792745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.121 [2024-07-22 12:28:15.792760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.121 [2024-07-22 12:28:15.792773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.121 [2024-07-22 12:28:15.792804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-22 12:28:15.802603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.121 [2024-07-22 12:28:15.802730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.121 [2024-07-22 12:28:15.802756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.121 [2024-07-22 12:28:15.802771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.121 [2024-07-22 12:28:15.802784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.121 [2024-07-22 12:28:15.802815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.121 qpair failed and we were unable to recover it. 
00:33:08.121 [2024-07-22 12:28:15.812645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.121 [2024-07-22 12:28:15.812770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.121 [2024-07-22 12:28:15.812796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.121 [2024-07-22 12:28:15.812810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.121 [2024-07-22 12:28:15.812829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.121 [2024-07-22 12:28:15.812860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-22 12:28:15.822709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.121 [2024-07-22 12:28:15.822844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.121 [2024-07-22 12:28:15.822869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.121 [2024-07-22 12:28:15.822884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.121 [2024-07-22 12:28:15.822898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.121 [2024-07-22 12:28:15.822928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-22 12:28:15.832705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.121 [2024-07-22 12:28:15.832826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.121 [2024-07-22 12:28:15.832853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.121 [2024-07-22 12:28:15.832869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.121 [2024-07-22 12:28:15.832883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.832913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-22 12:28:15.842724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.842843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.842869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.842884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.842898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.842928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.852747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.852874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.852899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.852913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.852927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.852957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.862787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.862909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.862934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.862949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.862962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.862993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-22 12:28:15.872801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.872917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.872943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.872958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.872971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.873001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.882826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.882944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.882970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.882985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.882998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.883028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.892906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.893035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.893061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.893075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.893089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.893134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-22 12:28:15.902893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.903023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.903049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.903070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.903085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.903117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.912965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.913093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.913118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.913133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.913146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.913176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.922974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.923108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.923133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.923148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.923161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.923192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-22 12:28:15.933011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.933144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.933170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.933185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.933199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.933230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.943016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.943144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.943170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.943184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.943198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.943228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.953107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.953249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.953274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.953288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.953303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.953333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-22 12:28:15.963082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.963214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.963240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.963255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.963268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.963298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-22 12:28:15.973111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.122 [2024-07-22 12:28:15.973277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.122 [2024-07-22 12:28:15.973304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.122 [2024-07-22 12:28:15.973318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.122 [2024-07-22 12:28:15.973332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.122 [2024-07-22 12:28:15.973363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-22 12:28:15.983159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.123 [2024-07-22 12:28:15.983339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.123 [2024-07-22 12:28:15.983366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.123 [2024-07-22 12:28:15.983398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.123 [2024-07-22 12:28:15.983415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.123 [2024-07-22 12:28:15.983446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.123 qpair failed and we were unable to recover it. 
00:33:08.123 [2024-07-22 12:28:15.993187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.123 [2024-07-22 12:28:15.993314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.123 [2024-07-22 12:28:15.993348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.123 [2024-07-22 12:28:15.993364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.123 [2024-07-22 12:28:15.993378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.123 [2024-07-22 12:28:15.993409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-22 12:28:16.003209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.123 [2024-07-22 12:28:16.003337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.123 [2024-07-22 12:28:16.003363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.123 [2024-07-22 12:28:16.003378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.123 [2024-07-22 12:28:16.003392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.123 [2024-07-22 12:28:16.003422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-22 12:28:16.013248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.123 [2024-07-22 12:28:16.013388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.123 [2024-07-22 12:28:16.013413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.123 [2024-07-22 12:28:16.013428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.123 [2024-07-22 12:28:16.013442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.123 [2024-07-22 12:28:16.013472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.123 qpair failed and we were unable to recover it. 
00:33:08.123 [2024-07-22 12:28:16.023290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.123 [2024-07-22 12:28:16.023415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.123 [2024-07-22 12:28:16.023441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.123 [2024-07-22 12:28:16.023456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.123 [2024-07-22 12:28:16.023470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.123 [2024-07-22 12:28:16.023513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-22 12:28:16.033249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.123 [2024-07-22 12:28:16.033368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.123 [2024-07-22 12:28:16.033394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.123 [2024-07-22 12:28:16.033409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.123 [2024-07-22 12:28:16.033423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.123 [2024-07-22 12:28:16.033460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-22 12:28:16.043285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.123 [2024-07-22 12:28:16.043406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.123 [2024-07-22 12:28:16.043432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.123 [2024-07-22 12:28:16.043447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.123 [2024-07-22 12:28:16.043461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.123 [2024-07-22 12:28:16.043491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.123 qpair failed and we were unable to recover it. 
00:33:08.384 [2024-07-22 12:28:16.053338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.384 [2024-07-22 12:28:16.053460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.384 [2024-07-22 12:28:16.053486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.384 [2024-07-22 12:28:16.053500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.384 [2024-07-22 12:28:16.053514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.384 [2024-07-22 12:28:16.053571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.384 qpair failed and we were unable to recover it. 00:33:08.384 [2024-07-22 12:28:16.063358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.384 [2024-07-22 12:28:16.063492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.384 [2024-07-22 12:28:16.063517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.384 [2024-07-22 12:28:16.063532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.384 [2024-07-22 12:28:16.063545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.384 [2024-07-22 12:28:16.063575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.385 qpair failed and we were unable to recover it. 00:33:08.385 [2024-07-22 12:28:16.073397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.385 [2024-07-22 12:28:16.073515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.385 [2024-07-22 12:28:16.073541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.385 [2024-07-22 12:28:16.073556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.385 [2024-07-22 12:28:16.073569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.385 [2024-07-22 12:28:16.073600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.385 qpair failed and we were unable to recover it. 
00:33:08.385 [2024-07-22 12:28:16.083423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:08.385 [2024-07-22 12:28:16.083541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:08.385 [2024-07-22 12:28:16.083572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:08.385 [2024-07-22 12:28:16.083588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:08.385 [2024-07-22 12:28:16.083602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90
00:33:08.385 [2024-07-22 12:28:16.083640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:08.385 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure block repeats 68 more times, roughly every 10 ms from 12:28:16.093448 through 12:28:16.765, each attempt against trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, always failing on tqpair=0x7f0544000b90, qpair id 4, and ending in "qpair failed and we were unable to recover it." ...]
00:33:08.909 [2024-07-22 12:28:16.775413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.909 [2024-07-22 12:28:16.775533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.909 [2024-07-22 12:28:16.775558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.909 [2024-07-22 12:28:16.775572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.909 [2024-07-22 12:28:16.775586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.909 [2024-07-22 12:28:16.775624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.909 qpair failed and we were unable to recover it. 00:33:08.909 [2024-07-22 12:28:16.785438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.909 [2024-07-22 12:28:16.785552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.909 [2024-07-22 12:28:16.785578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.909 [2024-07-22 12:28:16.785592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.909 [2024-07-22 12:28:16.785605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.909 [2024-07-22 12:28:16.785642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.909 qpair failed and we were unable to recover it. 00:33:08.909 [2024-07-22 12:28:16.795450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.909 [2024-07-22 12:28:16.795561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.909 [2024-07-22 12:28:16.795586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.909 [2024-07-22 12:28:16.795606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.909 [2024-07-22 12:28:16.795628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.909 [2024-07-22 12:28:16.795660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.909 qpair failed and we were unable to recover it. 
00:33:08.909 [2024-07-22 12:28:16.805508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.909 [2024-07-22 12:28:16.805644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.909 [2024-07-22 12:28:16.805670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.909 [2024-07-22 12:28:16.805684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.909 [2024-07-22 12:28:16.805696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.909 [2024-07-22 12:28:16.805726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.909 qpair failed and we were unable to recover it. 00:33:08.909 [2024-07-22 12:28:16.815529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.909 [2024-07-22 12:28:16.815658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.909 [2024-07-22 12:28:16.815684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.909 [2024-07-22 12:28:16.815698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.909 [2024-07-22 12:28:16.815711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.909 [2024-07-22 12:28:16.815741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.909 qpair failed and we were unable to recover it. 00:33:08.909 [2024-07-22 12:28:16.825572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.909 [2024-07-22 12:28:16.825717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.909 [2024-07-22 12:28:16.825743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.909 [2024-07-22 12:28:16.825757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.909 [2024-07-22 12:28:16.825770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.909 [2024-07-22 12:28:16.825800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.909 qpair failed and we were unable to recover it. 
00:33:08.909 [2024-07-22 12:28:16.835574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.909 [2024-07-22 12:28:16.835703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.909 [2024-07-22 12:28:16.835730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.909 [2024-07-22 12:28:16.835744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.909 [2024-07-22 12:28:16.835757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:08.909 [2024-07-22 12:28:16.835788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.909 qpair failed and we were unable to recover it. 00:33:09.170 [2024-07-22 12:28:16.845655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.170 [2024-07-22 12:28:16.845813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.170 [2024-07-22 12:28:16.845838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.170 [2024-07-22 12:28:16.845853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.170 [2024-07-22 12:28:16.845866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.170 [2024-07-22 12:28:16.845909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.170 qpair failed and we were unable to recover it. 00:33:09.170 [2024-07-22 12:28:16.855640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.170 [2024-07-22 12:28:16.855756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.170 [2024-07-22 12:28:16.855782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.170 [2024-07-22 12:28:16.855796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.170 [2024-07-22 12:28:16.855810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.170 [2024-07-22 12:28:16.855839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.170 qpair failed and we were unable to recover it. 
00:33:09.170 [2024-07-22 12:28:16.865656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.170 [2024-07-22 12:28:16.865774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.170 [2024-07-22 12:28:16.865799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.170 [2024-07-22 12:28:16.865813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.170 [2024-07-22 12:28:16.865826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.170 [2024-07-22 12:28:16.865855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.170 qpair failed and we were unable to recover it. 00:33:09.170 [2024-07-22 12:28:16.875711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.170 [2024-07-22 12:28:16.875832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.170 [2024-07-22 12:28:16.875858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.170 [2024-07-22 12:28:16.875872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.170 [2024-07-22 12:28:16.875885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.170 [2024-07-22 12:28:16.875914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.170 qpair failed and we were unable to recover it. 00:33:09.170 [2024-07-22 12:28:16.885728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.170 [2024-07-22 12:28:16.885843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.170 [2024-07-22 12:28:16.885876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.170 [2024-07-22 12:28:16.885891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.170 [2024-07-22 12:28:16.885904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.170 [2024-07-22 12:28:16.885934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.170 qpair failed and we were unable to recover it. 
00:33:09.170 [2024-07-22 12:28:16.895778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.170 [2024-07-22 12:28:16.895949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.170 [2024-07-22 12:28:16.895975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.170 [2024-07-22 12:28:16.895993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.170 [2024-07-22 12:28:16.896006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.170 [2024-07-22 12:28:16.896036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.170 qpair failed and we were unable to recover it. 00:33:09.170 [2024-07-22 12:28:16.905806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.170 [2024-07-22 12:28:16.905980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.170 [2024-07-22 12:28:16.906007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.170 [2024-07-22 12:28:16.906021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.170 [2024-07-22 12:28:16.906034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.906065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:16.915816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.915944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.915969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.915984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.915996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.916025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 
00:33:09.171 [2024-07-22 12:28:16.925861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.925987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.926012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.926026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.926039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.926075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:16.935890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.936012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.936038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.936053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.936066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.936095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:16.945866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.945982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.946008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.946022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.946036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.946066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 
00:33:09.171 [2024-07-22 12:28:16.955937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.956054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.956080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.956094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.956107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.956148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:16.965935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.966069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.966095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.966109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.966121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.966151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:16.975966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.976086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.976117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.976132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.976144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.976174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 
00:33:09.171 [2024-07-22 12:28:16.986008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.986122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.986147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.986161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.986175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.986204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:16.996016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:16.996126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:16.996151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:16.996165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:16.996178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:16.996207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:17.006041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:17.006156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:17.006182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:17.006196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:17.006209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:17.006240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 
00:33:09.171 [2024-07-22 12:28:17.016074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:17.016195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:17.016221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:17.016235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:17.016254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:17.016284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:17.026117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:17.026250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:17.026275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:17.026290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:17.026302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:17.026343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:17.036135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:17.036248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:17.036274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:17.036288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:17.036301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:17.036343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 
00:33:09.171 [2024-07-22 12:28:17.046242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.171 [2024-07-22 12:28:17.046386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.171 [2024-07-22 12:28:17.046411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.171 [2024-07-22 12:28:17.046425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.171 [2024-07-22 12:28:17.046437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.171 [2024-07-22 12:28:17.046468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.171 qpair failed and we were unable to recover it. 00:33:09.171 [2024-07-22 12:28:17.056218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.172 [2024-07-22 12:28:17.056340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.172 [2024-07-22 12:28:17.056365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.172 [2024-07-22 12:28:17.056379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.172 [2024-07-22 12:28:17.056391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.172 [2024-07-22 12:28:17.056420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.172 qpair failed and we were unable to recover it. 00:33:09.172 [2024-07-22 12:28:17.066221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.172 [2024-07-22 12:28:17.066352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.172 [2024-07-22 12:28:17.066378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.172 [2024-07-22 12:28:17.066392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.172 [2024-07-22 12:28:17.066404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.172 [2024-07-22 12:28:17.066435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.172 qpair failed and we were unable to recover it. 
00:33:09.172 [2024-07-22 12:28:17.076241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.172 [2024-07-22 12:28:17.076360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.172 [2024-07-22 12:28:17.076385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.172 [2024-07-22 12:28:17.076399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.172 [2024-07-22 12:28:17.076412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.172 [2024-07-22 12:28:17.076442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.172 qpair failed and we were unable to recover it. 00:33:09.172 [2024-07-22 12:28:17.086291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.172 [2024-07-22 12:28:17.086410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.172 [2024-07-22 12:28:17.086436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.172 [2024-07-22 12:28:17.086450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.172 [2024-07-22 12:28:17.086463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.172 [2024-07-22 12:28:17.086492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.172 qpair failed and we were unable to recover it. 00:33:09.172 [2024-07-22 12:28:17.096328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.172 [2024-07-22 12:28:17.096451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.172 [2024-07-22 12:28:17.096475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.172 [2024-07-22 12:28:17.096489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.172 [2024-07-22 12:28:17.096502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.172 [2024-07-22 12:28:17.096532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.172 qpair failed and we were unable to recover it. 
00:33:09.432 [2024-07-22 12:28:17.106405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.106526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.106552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.106572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.106585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.106623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.116388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.116513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.116538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.116552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.116565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.116594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.126409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.126526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.126551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.126566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.126578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.126608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 
00:33:09.432 [2024-07-22 12:28:17.136429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.136552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.136578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.136592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.136605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.136643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.146452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.146573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.146600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.146621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.146636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.146680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.156499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.156630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.156656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.156670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.156683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.156713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 
00:33:09.432 [2024-07-22 12:28:17.166499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.166612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.166644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.166658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.166671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.166702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.176556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.176688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.176714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.176728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.176740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.176771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.186555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.186677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.186705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.186720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.186733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.186763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 
00:33:09.432 [2024-07-22 12:28:17.196628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.196741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.196767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.196788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.196802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.196832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.206608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.206726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.206752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.206766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.206779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.206809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.216674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.216802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.216827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.216842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.216855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.216885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 
00:33:09.432 [2024-07-22 12:28:17.226679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.226816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.226842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.432 [2024-07-22 12:28:17.226856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.432 [2024-07-22 12:28:17.226869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.432 [2024-07-22 12:28:17.226899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.432 qpair failed and we were unable to recover it. 00:33:09.432 [2024-07-22 12:28:17.236740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.432 [2024-07-22 12:28:17.236854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.432 [2024-07-22 12:28:17.236880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.236894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.236907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.236938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.246726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.246843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.246869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.246883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.246896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.246925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 
00:33:09.433 [2024-07-22 12:28:17.256778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.256902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.256927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.256942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.256955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.256985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.266785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.266900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.266926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.266939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.266952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.266982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.276819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.276943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.276968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.276982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.276995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.277025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 
00:33:09.433 [2024-07-22 12:28:17.286859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.286982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.287013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.287028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.287041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.287071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.296863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.296982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.297007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.297021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.297034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.297064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.306888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.307004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.307030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.307044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.307058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.307087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 
00:33:09.433 [2024-07-22 12:28:17.316916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.317028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.317054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.317068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.317080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.317110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.326941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.327053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.327079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.327093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.327106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.327154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.337003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.337120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.337146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.337160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.337173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.337202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 
00:33:09.433 [2024-07-22 12:28:17.347005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.347123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.347148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.347162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.347175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.347205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.433 [2024-07-22 12:28:17.357023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.433 [2024-07-22 12:28:17.357162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.433 [2024-07-22 12:28:17.357187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.433 [2024-07-22 12:28:17.357201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.433 [2024-07-22 12:28:17.357214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.433 [2024-07-22 12:28:17.357244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.433 qpair failed and we were unable to recover it. 00:33:09.693 [2024-07-22 12:28:17.367054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.367171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.367196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.367210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.367223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.367253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 
00:33:09.693 [2024-07-22 12:28:17.377145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.377268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.377301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.377316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.377329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.377358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 00:33:09.693 [2024-07-22 12:28:17.387118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.387234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.387260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.387275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.387288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.387329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 00:33:09.693 [2024-07-22 12:28:17.397127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.397244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.397270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.397283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.397296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.397326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 
00:33:09.693 [2024-07-22 12:28:17.407170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.407284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.407309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.407323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.407336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.407366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 00:33:09.693 [2024-07-22 12:28:17.417223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.417390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.417415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.417429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.417447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.417478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 00:33:09.693 [2024-07-22 12:28:17.427240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.427351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.427376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.427390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.427403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.427432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 
00:33:09.693 [2024-07-22 12:28:17.437255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.437368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.437394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.437408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.437421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.437450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 00:33:09.693 [2024-07-22 12:28:17.447304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.447418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.447444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.447458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.693 [2024-07-22 12:28:17.447471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.693 [2024-07-22 12:28:17.447501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.693 qpair failed and we were unable to recover it. 00:33:09.693 [2024-07-22 12:28:17.457336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.693 [2024-07-22 12:28:17.457505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.693 [2024-07-22 12:28:17.457530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.693 [2024-07-22 12:28:17.457547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.457560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.457601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 
00:33:09.694 [2024-07-22 12:28:17.467341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.467485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.467511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.467525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.467538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.467568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.477402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.477560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.477586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.477599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.477619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.477653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.487410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.487536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.487565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.487582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.487598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.487638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 
00:33:09.694 [2024-07-22 12:28:17.497432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.497556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.497582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.497596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.497609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.497648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.507458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.507574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.507600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.507621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.507645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.507677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.517492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.517632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.517658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.517671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.517684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.517715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 
00:33:09.694 [2024-07-22 12:28:17.527504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.527621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.527647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.527661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.527674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.527706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.537535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.537665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.537691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.537705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.537717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.537748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.547567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.547706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.547733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.547747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.547760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.547803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 
00:33:09.694 [2024-07-22 12:28:17.557605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.557729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.557754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.557768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.557781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.557811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.567610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.567729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.567755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.567768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.567781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.567813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.577669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.577803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.577828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.577842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.577855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.577885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 
00:33:09.694 [2024-07-22 12:28:17.587687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.587814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.587840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.587854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.587867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.587896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.694 qpair failed and we were unable to recover it. 00:33:09.694 [2024-07-22 12:28:17.597734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.694 [2024-07-22 12:28:17.597850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.694 [2024-07-22 12:28:17.597877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.694 [2024-07-22 12:28:17.597900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.694 [2024-07-22 12:28:17.597915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.694 [2024-07-22 12:28:17.597958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.695 qpair failed and we were unable to recover it. 00:33:09.695 [2024-07-22 12:28:17.607756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.695 [2024-07-22 12:28:17.607885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.695 [2024-07-22 12:28:17.607911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.695 [2024-07-22 12:28:17.607926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.695 [2024-07-22 12:28:17.607939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.695 [2024-07-22 12:28:17.607968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.695 qpair failed and we were unable to recover it. 
00:33:09.695 [2024-07-22 12:28:17.617759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.695 [2024-07-22 12:28:17.617892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.695 [2024-07-22 12:28:17.617917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.695 [2024-07-22 12:28:17.617931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.695 [2024-07-22 12:28:17.617943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.695 [2024-07-22 12:28:17.617974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.695 qpair failed and we were unable to recover it. 00:33:09.954 [2024-07-22 12:28:17.627795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.954 [2024-07-22 12:28:17.627907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.954 [2024-07-22 12:28:17.627932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.954 [2024-07-22 12:28:17.627946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.954 [2024-07-22 12:28:17.627959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.954 [2024-07-22 12:28:17.627989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.954 qpair failed and we were unable to recover it. 00:33:09.954 [2024-07-22 12:28:17.637803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.954 [2024-07-22 12:28:17.637913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.954 [2024-07-22 12:28:17.637939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.954 [2024-07-22 12:28:17.637954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.954 [2024-07-22 12:28:17.637967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.954 [2024-07-22 12:28:17.637996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.954 qpair failed and we were unable to recover it. 
00:33:09.954 [2024-07-22 12:28:17.647839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.954 [2024-07-22 12:28:17.647966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.954 [2024-07-22 12:28:17.647990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.954 [2024-07-22 12:28:17.648004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.954 [2024-07-22 12:28:17.648017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.954 [2024-07-22 12:28:17.648048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.954 qpair failed and we were unable to recover it. 00:33:09.954 [2024-07-22 12:28:17.657892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.658013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.658039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.658053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.658066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.658098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.667937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.668061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.668087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.668102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.668116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.668145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 
00:33:09.955 [2024-07-22 12:28:17.677929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.678041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.678066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.678081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.678094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.678136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.687952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.688066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.688096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.688112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.688124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.688153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.698051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.698169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.698194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.698208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.698221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.698251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 
00:33:09.955 [2024-07-22 12:28:17.708018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.708137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.708162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.708176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.708189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.708219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.718079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.718199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.718224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.718238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.718251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.718293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.728062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.728215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.728239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.728253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.728265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.728299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 
00:33:09.955 [2024-07-22 12:28:17.738159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.738288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.738314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.738329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.738342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.738371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.748152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.748275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.748301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.748315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.748328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.748357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.758189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.758327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.758352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.758366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.758379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.758409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 
00:33:09.955 [2024-07-22 12:28:17.768171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.768308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.768333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.768346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.768359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.768390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.778276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.778427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.778458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.778473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.778486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.778515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 00:33:09.955 [2024-07-22 12:28:17.788258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.788378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.788404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.788418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.955 [2024-07-22 12:28:17.788431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.955 [2024-07-22 12:28:17.788461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.955 qpair failed and we were unable to recover it. 
00:33:09.955 [2024-07-22 12:28:17.798284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.955 [2024-07-22 12:28:17.798423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.955 [2024-07-22 12:28:17.798450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.955 [2024-07-22 12:28:17.798464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.798480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.798514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 00:33:09.956 [2024-07-22 12:28:17.808348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.808467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.808493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.808508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.808521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.808563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 00:33:09.956 [2024-07-22 12:28:17.818351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.818485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.818510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.818524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.818542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.818572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 
00:33:09.956 [2024-07-22 12:28:17.828360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.828498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.828524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.828539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.828552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.828582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 00:33:09.956 [2024-07-22 12:28:17.838435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.838561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.838587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.838601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.838621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.838655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 00:33:09.956 [2024-07-22 12:28:17.848460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.848578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.848604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.848627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.848641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.848671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 
00:33:09.956 [2024-07-22 12:28:17.858499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.858668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.858693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.858708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.858720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.858750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 00:33:09.956 [2024-07-22 12:28:17.868526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.868651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.868677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.868691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.868704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.868735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 00:33:09.956 [2024-07-22 12:28:17.878520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:09.956 [2024-07-22 12:28:17.878638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:09.956 [2024-07-22 12:28:17.878663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:09.956 [2024-07-22 12:28:17.878677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:09.956 [2024-07-22 12:28:17.878690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:09.956 [2024-07-22 12:28:17.878720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.956 qpair failed and we were unable to recover it. 
00:33:10.214 [2024-07-22 12:28:17.888538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:10.214 [2024-07-22 12:28:17.888678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:10.214 [2024-07-22 12:28:17.888704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:10.214 [2024-07-22 12:28:17.888718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:10.215 [2024-07-22 12:28:17.888732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:10.215 [2024-07-22 12:28:17.888774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:10.215 qpair failed and we were unable to recover it. 00:33:10.215 [2024-07-22 12:28:17.898561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:10.215 [2024-07-22 12:28:17.898694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:10.215 [2024-07-22 12:28:17.898720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:10.215 [2024-07-22 12:28:17.898734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:10.215 [2024-07-22 12:28:17.898747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:10.215 [2024-07-22 12:28:17.898777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:10.215 qpair failed and we were unable to recover it. 00:33:10.215 [2024-07-22 12:28:17.908590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:10.215 [2024-07-22 12:28:17.908710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:10.215 [2024-07-22 12:28:17.908736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:10.215 [2024-07-22 12:28:17.908750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:10.215 [2024-07-22 12:28:17.908769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0544000b90 00:33:10.215 [2024-07-22 12:28:17.908800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:10.215 qpair failed and we were unable to recover it. 00:33:10.215 [2024-07-22 12:28:17.908905] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:10.215 A controller has encountered a failure and is being reset. 
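The block of near-identical repeats above is the disconnect test doing its job: the target controller has been torn down mid-association, so every I/O-queue CONNECT the host retries is rejected at ctrlr.c with "Unknown controller ID 0x1", and the host-side poller reports sct 1, sc 130. Read against the NVMe-oF Fabrics spec, sct 1 is the Command Specific status type and sc 130 (0x82) is the Connect-specific "invalid parameters" code, consistent with the target no longer recognizing cntlid 0x1. A small hedged helper, not part of the test suite (code values quoted from the spec from memory), that turns the pair into words:

    # decode_connect_status SCT SC -- interpret the status pair printed by
    # _nvme_fabric_qpair_connect_poll for a failed Fabrics CONNECT command.
    decode_connect_status() {
        local sct=$1 sc=$2 name
        case "$sct" in
            0) name="generic" ;;
            1) name="command-specific" ;;
            2) name="media/data-integrity" ;;
            3) name="path-related" ;;
            *) name="sct=$sct" ;;
        esac
        case "$(printf '0x%02x' "$sc")" in
            0x80) echo "$name: connect incompatible format" ;;
            0x81) echo "$name: connect controller busy" ;;
            0x82) echo "$name: connect invalid parameters (e.g. stale cntlid)" ;;
            0x83) echo "$name: connect restart discovery" ;;
            0x84) echo "$name: connect invalid host" ;;
            *)    echo "$name: sc=$sc" ;;
        esac
    }
    decode_connect_status 1 130   # => command-specific: connect invalid parameters (e.g. stale cntlid)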
00:33:10.215 [2024-07-22 12:28:17.918662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:10.215 [2024-07-22 12:28:17.918834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:10.215 [2024-07-22 12:28:17.918866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:10.215 [2024-07-22 12:28:17.918883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:10.215 [2024-07-22 12:28:17.918896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0554000b90 00:33:10.215 [2024-07-22 12:28:17.918928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:10.215 qpair failed and we were unable to recover it. 00:33:10.215 [2024-07-22 12:28:17.928700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:10.215 [2024-07-22 12:28:17.928822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:10.215 [2024-07-22 12:28:17.928849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:10.215 [2024-07-22 12:28:17.928864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:10.215 [2024-07-22 12:28:17.928877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0554000b90 00:33:10.215 [2024-07-22 12:28:17.928919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:10.215 qpair failed and we were unable to recover it. 00:33:10.215 Controller properly reset. 00:33:10.215 Initializing NVMe Controllers 00:33:10.215 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:10.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:10.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:10.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:10.215 Initialization complete. Launching workers. 
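"Controller properly reset." is the recovery point: the host's reset handler rebuilds the admin queue, a fresh CONNECT finally completes, and the initiator re-attaches to nqn.2016-06.io.spdk:cnode1 and spreads its I/O qpairs across lcores 0-3 before relaunching the workers. Outside the harness, the equivalent manual round-trip with the kernel initiator would look roughly like this (a hedged sketch; it assumes nvme-cli is installed and the target from the log is still listening on 10.0.0.2:4420):

    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys                                # confirm the association is live
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # clean teardown when done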
00:33:10.215 Starting thread on core 1 00:33:10.215 Starting thread on core 2 00:33:10.215 Starting thread on core 3 00:33:10.215 Starting thread on core 0 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:10.215 00:33:10.215 real 0m10.731s 00:33:10.215 user 0m18.600s 00:33:10.215 sys 0m5.158s 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.215 ************************************ 00:33:10.215 END TEST nvmf_target_disconnect_tc2 00:33:10.215 ************************************ 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:10.215 12:28:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:10.215 rmmod nvme_tcp 00:33:10.215 rmmod nvme_fabrics 00:33:10.215 rmmod nvme_keyring 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1146849 ']' 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1146849 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1146849 ']' 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1146849 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146849 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146849' 00:33:10.215 killing process with pid 1146849 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1146849 00:33:10.215 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1146849 00:33:10.474 
12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:10.474 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:10.474 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:10.474 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:10.474 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:10.474 12:28:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.474 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:10.474 12:28:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.012 12:28:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:13.012 00:33:13.012 real 0m15.503s 00:33:13.012 user 0m44.677s 00:33:13.012 sys 0m7.060s 00:33:13.012 12:28:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:13.012 12:28:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:13.012 ************************************ 00:33:13.012 END TEST nvmf_target_disconnect 00:33:13.012 ************************************ 00:33:13.012 12:28:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:13.012 12:28:20 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:13.012 12:28:20 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:13.012 12:28:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.012 12:28:20 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:13.012 00:33:13.012 real 27m3.799s 00:33:13.012 user 73m43.032s 00:33:13.012 sys 6m22.886s 00:33:13.012 12:28:20 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:13.012 12:28:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.012 ************************************ 00:33:13.012 END TEST nvmf_tcp 00:33:13.012 ************************************ 00:33:13.012 12:28:20 -- common/autotest_common.sh@1142 -- # return 0 00:33:13.012 12:28:20 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:33:13.012 12:28:20 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:13.012 12:28:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:13.012 12:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:13.012 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:33:13.012 ************************************ 00:33:13.012 START TEST spdkcli_nvmf_tcp 00:33:13.012 ************************************ 00:33:13.012 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:13.012 * Looking for test storage... 
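With nvmf_target_disconnect finished and nvmftestfini done flushing the cvl_0_1 address, autotest moves on to spdkcli_nvmf_tcp, which exercises the same JSON-RPC surface through spdkcli's shell tree (/bdevs/malloc, /nvmf/subsystem, ...). The batch it hands to spdkcli_job.py below is equivalent to a handful of direct rpc.py calls; a rough sketch of the first few follows (flag spellings are from memory and version-dependent, so treat rpc.py <method> -h on the matching SPDK tree as authoritative; the job additionally caps max_io_qpairs_per_ctrlr at 4):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512        # 32 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t TCP -u 8192        # io_unit_size=8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
        -s N37SXV509SRW -m 4 -a                                  # serial, max namespaces, allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4260 -f ipv4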
00:33:13.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:13.012 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:13.012 12:28:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:13.012 12:28:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1148039 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1148039 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1148039 ']' 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.013 [2024-07-22 12:28:20.550159] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:33:13.013 [2024-07-22 12:28:20.550260] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148039 ] 00:33:13.013 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.013 [2024-07-22 12:28:20.584763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:13.013 [2024-07-22 12:28:20.628672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:13.013 [2024-07-22 12:28:20.729691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.013 [2024-07-22 12:28:20.729700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.013 12:28:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:13.013 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:13.013 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:13.013 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:13.013 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:13.013 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:13.013 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:13.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:13.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:13.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:13.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:13.013 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:13.013 ' 00:33:16.294 [2024-07-22 12:28:23.490800] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.859 [2024-07-22 12:28:24.719092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:19.389 [2024-07-22 12:28:26.986187] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:21.293 [2024-07-22 12:28:28.956653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:22.674 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:22.674 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:22.674 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:22.674 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:22.674 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:22.674 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:22.674 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:22.674 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:22.674 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:22.674 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:22.674 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:22.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:22.674 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:22.674 12:28:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.242 12:28:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:23.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:23.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:23.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:23.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:23.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:23.242 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:23.242 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:23.242 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:23.242 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:23.242 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:23.242 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:23.242 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:23.242 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:23.242 ' 00:33:28.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:28.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:28.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:28.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:28.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:28.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:28.507 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:28.507 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:28.507 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:28.507 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:28.507 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:28.507 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:28.507 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:28.507 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1148039 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1148039 ']' 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1148039 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1148039 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1148039' 00:33:28.507 killing process with pid 1148039 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1148039 00:33:28.507 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1148039 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1148039 ']' 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1148039 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1148039 ']' 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1148039 00:33:28.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1148039) - No such process 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1148039 is not found' 00:33:28.764 Process with pid 1148039 is not found 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:28.764 00:33:28.764 real 0m16.157s 00:33:28.764 user 0m34.342s 00:33:28.764 sys 0m0.833s 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:28.764 12:28:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:28.764 ************************************ 00:33:28.764 END TEST spdkcli_nvmf_tcp 00:33:28.764 ************************************ 00:33:28.764 12:28:36 -- common/autotest_common.sh@1142 -- # return 0 00:33:28.764 
12:28:36 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:28.764 12:28:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:28.764 12:28:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.764 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:33:28.764 ************************************ 00:33:28.764 START TEST nvmf_identify_passthru 00:33:28.764 ************************************ 00:33:28.764 12:28:36 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:28.764 * Looking for test storage... 00:33:29.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:29.023 12:28:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.023 12:28:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.023 12:28:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.023 12:28:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:29.023 12:28:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.023 12:28:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.023 12:28:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.023 12:28:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:29.023 12:28:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.023 12:28:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.023 12:28:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:29.023 12:28:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:29.023 12:28:36 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:29.023 12:28:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.919 12:28:38 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.919 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:30.920 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:30.920 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:30.920 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:30.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:30.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:33:30.920 00:33:30.920 --- 10.0.0.2 ping statistics --- 00:33:30.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.920 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:30.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:33:30.920 00:33:30.920 --- 10.0.0.1 ping statistics --- 00:33:30.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.920 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:30.920 12:28:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:30.920 12:28:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:30.920 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:30.920 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:30.920 12:28:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:30.921 12:28:38 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:33:30.921 12:28:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:30.921 12:28:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:30.921 12:28:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:30.921 12:28:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:30.921 12:28:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:30.921 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.161 
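The nvmf_tcp_init sequence above wires the two ports of one physical NIC back-to-back: cvl_0_0 is moved into a fresh namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), and the two pings confirm the path in both directions before nvme-tcp is loaded. A condensed sketch of the same bring-up, with names taken from the trace; the address flushes and error handling are omitted:

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                    # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1             # target ns -> root ns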
12:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:33:35.161 12:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:35.161 12:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:35.161 12:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:35.161 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.344 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:39.344 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:39.344 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:39.344 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1152538 00:33:39.344 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:39.344 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:39.344 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1152538 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1152538 ']' 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:39.344 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:39.344 [2024-07-22 12:28:47.252494] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:33:39.344 [2024-07-22 12:28:47.252584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.603 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.603 [2024-07-22 12:28:47.296075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:39.603 [2024-07-22 12:28:47.326846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:39.603 [2024-07-22 12:28:47.420305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:39.603 [2024-07-22 12:28:47.420364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.603 [2024-07-22 12:28:47.420381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.603 [2024-07-22 12:28:47.420395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.603 [2024-07-22 12:28:47.420407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:39.603 [2024-07-22 12:28:47.420471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.603 [2024-07-22 12:28:47.420795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:39.603 [2024-07-22 12:28:47.422634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:39.603 [2024-07-22 12:28:47.422645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.603 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.603 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:33:39.603 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:39.603 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.603 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:39.603 INFO: Log level set to 20 00:33:39.603 INFO: Requests: 00:33:39.603 { 00:33:39.604 "jsonrpc": "2.0", 00:33:39.604 "method": "nvmf_set_config", 00:33:39.604 "id": 1, 00:33:39.604 "params": { 00:33:39.604 "admin_cmd_passthru": { 00:33:39.604 "identify_ctrlr": true 00:33:39.604 } 00:33:39.604 } 00:33:39.604 } 00:33:39.604 00:33:39.604 INFO: response: 00:33:39.604 { 00:33:39.604 "jsonrpc": "2.0", 00:33:39.604 "id": 1, 00:33:39.604 "result": true 00:33:39.604 } 00:33:39.604 00:33:39.604 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.604 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:39.604 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.604 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:39.604 INFO: Setting log level to 20 00:33:39.604 INFO: Setting log level to 20 00:33:39.604 INFO: Log level set to 20 00:33:39.604 INFO: Log level set to 20 00:33:39.604 INFO: Requests: 00:33:39.604 { 00:33:39.604 "jsonrpc": "2.0", 00:33:39.604 "method": "framework_start_init", 00:33:39.604 "id": 1 00:33:39.604 } 00:33:39.604 00:33:39.604 INFO: Requests: 00:33:39.604 { 00:33:39.604 "jsonrpc": "2.0", 00:33:39.604 "method": "framework_start_init", 00:33:39.604 "id": 1 00:33:39.604 } 00:33:39.604 00:33:39.862 [2024-07-22 12:28:47.609975] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:39.862 INFO: response: 00:33:39.862 { 00:33:39.862 "jsonrpc": "2.0", 00:33:39.862 "id": 1, 00:33:39.862 "result": true 00:33:39.862 } 00:33:39.862 00:33:39.862 INFO: response: 00:33:39.862 { 00:33:39.862 "jsonrpc": "2.0", 00:33:39.862 "id": 1, 00:33:39.862 "result": true 00:33:39.862 } 00:33:39.862 00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.862 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
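The two JSON-RPC exchanges above are only accepted in this order because the target was launched with --wait-for-rpc: nvmf_set_config with identify_ctrlr enabled has to land before framework_start_init finishes subsystem initialization, which is when the "Custom identify ctrlr handler enabled" notice appears. An equivalent pair of calls via rpc.py (illustrative; the test drives them through its rpc_cmd wrapper against the namespaced target):

    # against the default /var/tmp/spdk.sock; target started with --wait-for-rpc
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init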
00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:39.862 INFO: Setting log level to 40 00:33:39.862 INFO: Setting log level to 40 00:33:39.862 INFO: Setting log level to 40 00:33:39.862 [2024-07-22 12:28:47.620131] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.862 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:39.862 12:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.862 12:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.147 Nvme0n1 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.147 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.147 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.147 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.147 [2024-07-22 12:28:50.520006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.147 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.147 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.147 [ 00:33:43.147 { 00:33:43.147 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:43.147 "subtype": "Discovery", 00:33:43.147 "listen_addresses": [], 00:33:43.147 "allow_any_host": true, 00:33:43.147 "hosts": [] 00:33:43.147 }, 00:33:43.147 { 00:33:43.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.147 "subtype": "NVMe", 00:33:43.147 "listen_addresses": [ 00:33:43.147 { 00:33:43.147 "trtype": "TCP", 00:33:43.147 "adrfam": "IPv4", 00:33:43.147 "traddr": "10.0.0.2", 00:33:43.147 
"trsvcid": "4420" 00:33:43.147 } 00:33:43.147 ], 00:33:43.147 "allow_any_host": true, 00:33:43.147 "hosts": [], 00:33:43.147 "serial_number": "SPDK00000000000001", 00:33:43.147 "model_number": "SPDK bdev Controller", 00:33:43.147 "max_namespaces": 1, 00:33:43.147 "min_cntlid": 1, 00:33:43.147 "max_cntlid": 65519, 00:33:43.147 "namespaces": [ 00:33:43.147 { 00:33:43.147 "nsid": 1, 00:33:43.147 "bdev_name": "Nvme0n1", 00:33:43.147 "name": "Nvme0n1", 00:33:43.148 "nguid": "481F371B5A4D4669B2169E0E0042CDE1", 00:33:43.148 "uuid": "481f371b-5a4d-4669-b216-9e0e0042cde1" 00:33:43.148 } 00:33:43.148 ] 00:33:43.148 } 00:33:43.148 ] 00:33:43.148 12:28:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.148 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:43.148 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:43.148 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:43.148 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.148 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:33:43.148 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:43.148 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:43.148 12:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:43.148 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.148 12:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:43.148 12:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:33:43.148 12:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:43.148 12:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.148 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.148 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.406 12:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:43.406 12:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:43.406 rmmod nvme_tcp 00:33:43.406 rmmod nvme_fabrics 00:33:43.406 rmmod nvme_keyring 00:33:43.406 12:28:51 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1152538 ']' 00:33:43.406 12:28:51 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1152538 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1152538 ']' 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1152538 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152538 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152538' 00:33:43.406 killing process with pid 1152538 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1152538 00:33:43.406 12:28:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1152538 00:33:45.311 12:28:52 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:45.311 12:28:52 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:45.311 12:28:52 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:45.311 12:28:52 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:45.311 12:28:52 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:45.311 12:28:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.311 12:28:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:45.311 12:28:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.217 12:28:54 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:47.217 00:33:47.217 real 0m18.136s 00:33:47.217 user 0m27.585s 00:33:47.217 sys 0m2.299s 00:33:47.217 12:28:54 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:47.217 12:28:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.217 ************************************ 00:33:47.217 END TEST nvmf_identify_passthru 00:33:47.217 ************************************ 00:33:47.217 12:28:54 -- common/autotest_common.sh@1142 -- # return 0 00:33:47.217 12:28:54 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:47.217 12:28:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:47.217 12:28:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:47.217 12:28:54 -- common/autotest_common.sh@10 -- # set +x 00:33:47.217 ************************************ 00:33:47.217 START TEST nvmf_dif 00:33:47.217 ************************************ 00:33:47.217 12:28:54 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:47.217 * Looking for test 
storage... 00:33:47.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:47.217 12:28:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.217 12:28:54 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.217 12:28:54 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.217 12:28:54 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.217 12:28:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.217 12:28:54 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.217 12:28:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.217 12:28:54 nvmf_dif -- 
paths/export.sh@5 -- # export PATH 00:33:47.217 12:28:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:47.217 12:28:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:47.217 12:28:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:47.217 12:28:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:47.217 12:28:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:47.217 12:28:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.217 12:28:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:47.217 12:28:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:47.217 12:28:54 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:33:47.217 12:28:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:33:49.117 12:28:56 nvmf_dif 
-- nvmf/common.sh@298 -- # mlx=() 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:49.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:49.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:49.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:49.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:49.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:33:49.117 00:33:49.117 --- 10.0.0.2 ping statistics --- 00:33:49.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.117 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:33:49.117 12:28:56 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:49.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:33:49.118 00:33:49.118 --- 10.0.0.1 ping statistics --- 00:33:49.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.118 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:33:49.118 12:28:56 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.118 12:28:56 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:33:49.118 12:28:56 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:49.118 12:28:56 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:50.054 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:50.054 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:50.054 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:50.054 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:50.054 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:50.054 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:50.054 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:50.054 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:50.054 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:50.054 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:50.054 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:50.054 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:50.054 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:50.054 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:50.054 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:50.054 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:50.054 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:50.312 12:28:58 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.312 12:28:58 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:50.312 12:28:58 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:50.312 12:28:58 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.312 12:28:58 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:50.312 12:28:58 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:50.312 12:28:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:50.312 12:28:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:50.312 12:28:58 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:50.312 12:28:58 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:50.312 12:28:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:50.313 12:28:58 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1155798 00:33:50.313 12:28:58 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:50.313 12:28:58 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1155798 00:33:50.313 12:28:58 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1155798 ']' 00:33:50.313 12:28:58 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.313 12:28:58 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:50.313 12:28:58 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.313 12:28:58 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:50.313 12:28:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:50.313 [2024-07-22 12:28:58.178351] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:33:50.313 [2024-07-22 12:28:58.178419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.313 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.313 [2024-07-22 12:28:58.216500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:50.313 [2024-07-22 12:28:58.242704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.570 [2024-07-22 12:28:58.326907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.570 [2024-07-22 12:28:58.326960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.570 [2024-07-22 12:28:58.326982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.570 [2024-07-22 12:28:58.326994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.570 [2024-07-22 12:28:58.327004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
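waitforlisten above blocks until the freshly forked nvmf_tgt answers on its JSON-RPC socket; conceptually it is a poll loop along these lines (a sketch, not the helper's exact code):

    # poll until the app's RPC server accepts a trivial request
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done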
00:33:50.570 [2024-07-22 12:28:58.327029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:33:50.570 12:28:58 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:50.570 12:28:58 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.570 12:28:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:50.570 12:28:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:50.570 [2024-07-22 12:28:58.466537] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.570 12:28:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:50.570 12:28:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:50.570 ************************************ 00:33:50.570 START TEST fio_dif_1_default 00:33:50.570 ************************************ 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.570 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:50.828 bdev_null0 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- 
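The bdev_null_create call traced above backs the test with a 64 MB null bdev of 512-byte blocks, each carrying 16 bytes of metadata that hold Type 1 protection information (guard CRC plus a reference tag tied to the LBA); combined with the transport's --dif-insert-or-strip flag, the target inserts PI on ingress and strips or verifies it on egress instead of passing it over the wire. The rpc.py equivalents of the plumbing traced here and just below (illustrative):

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420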
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:50.828 [2024-07-22 12:28:58.526906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:50.828 { 00:33:50.828 "params": { 00:33:50.828 "name": "Nvme$subsystem", 00:33:50.828 "trtype": "$TEST_TRANSPORT", 00:33:50.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.828 "adrfam": "ipv4", 00:33:50.828 "trsvcid": "$NVMF_PORT", 00:33:50.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.828 "hdgst": ${hdgst:-false}, 00:33:50.828 "ddgst": ${ddgst:-false} 00:33:50.828 }, 00:33:50.828 "method": "bdev_nvme_attach_controller" 00:33:50.828 } 00:33:50.828 EOF 00:33:50.828 )") 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:50.828 "params": { 00:33:50.828 "name": "Nvme0", 00:33:50.828 "trtype": "tcp", 00:33:50.828 "traddr": "10.0.0.2", 00:33:50.828 "adrfam": "ipv4", 00:33:50.828 "trsvcid": "4420", 00:33:50.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:50.828 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:50.828 "hdgst": false, 00:33:50.828 "ddgst": false 00:33:50.828 }, 00:33:50.828 "method": "bdev_nvme_attach_controller" 00:33:50.828 }' 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:50.828 12:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.087 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:51.087 fio-3.35 00:33:51.087 Starting 1 thread 00:33:51.087 EAL: No free 2048 kB hugepages reported on node 1 00:34:03.326 00:34:03.326 filename0: (groupid=0, jobs=1): err= 0: pid=1156021: Mon Jul 22 12:29:09 2024 00:34:03.326 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:34:03.326 slat (nsec): min=5138, max=66424, avg=8933.00, stdev=3889.99 00:34:03.326 clat (usec): min=40836, max=46188, avg=41001.89, stdev=341.52 00:34:03.326 lat (usec): min=40844, max=46238, avg=41010.83, stdev=342.50 00:34:03.326 clat percentiles (usec): 00:34:03.326 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:03.326 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:03.326 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:03.326 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:34:03.326 | 99.99th=[46400] 00:34:03.326 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:34:03.326 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:34:03.326 lat (msec) : 50=100.00% 00:34:03.326 cpu : usr=89.17%, sys=10.56%, ctx=16, majf=0, minf=286 00:34:03.326 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.326 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.326 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:03.326 00:34:03.326 Run status group 0 (all jobs): 00:34:03.326 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:34:03.326 12:29:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:03.326 12:29:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:03.326 12:29:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:03.326 12:29:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:03.326 12:29:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:03.326 12:29:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:03.326 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 00:34:03.327 real 0m11.109s 00:34:03.327 user 0m9.964s 00:34:03.327 sys 0m1.343s 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 ************************************ 00:34:03.327 END TEST fio_dif_1_default 00:34:03.327 ************************************ 00:34:03.327 12:29:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:03.327 12:29:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:03.327 12:29:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:03.327 12:29:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 ************************************ 00:34:03.327 START TEST fio_dif_1_multi_subsystems 00:34:03.327 ************************************ 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 bdev_null0 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 [2024-07-22 12:29:09.690722] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 bdev_null1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 12:29:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.327 { 00:34:03.327 "params": { 00:34:03.327 "name": "Nvme$subsystem", 00:34:03.327 "trtype": "$TEST_TRANSPORT", 00:34:03.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.327 "adrfam": "ipv4", 00:34:03.327 "trsvcid": "$NVMF_PORT", 00:34:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.327 "hdgst": ${hdgst:-false}, 00:34:03.327 "ddgst": ${ddgst:-false} 00:34:03.327 }, 00:34:03.327 "method": "bdev_nvme_attach_controller" 00:34:03.327 } 00:34:03.327 EOF 00:34:03.327 )") 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.327 { 00:34:03.327 "params": { 00:34:03.327 "name": "Nvme$subsystem", 00:34:03.327 "trtype": "$TEST_TRANSPORT", 00:34:03.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.327 "adrfam": "ipv4", 00:34:03.327 "trsvcid": "$NVMF_PORT", 00:34:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.327 "hdgst": ${hdgst:-false}, 00:34:03.327 "ddgst": ${ddgst:-false} 00:34:03.327 }, 00:34:03.327 "method": "bdev_nvme_attach_controller" 00:34:03.327 } 00:34:03.327 EOF 00:34:03.327 )") 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
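The (( file = 1 )) / (( file <= files )) / (( file++ )) lines above are gen_fio_conf emitting one fio job stanza per target: stanza 0 up front, then one more per extra file. A schematic of that loop, under the assumption that each stanza points the spdk_bdev engine at the NvmeNn1 bdev created by bdev_nvme_attach_controller; the generated file itself is never echoed into the log, only fio's banner below confirms the filename0/filename1 job names:

  gen_conf() {
    local file files=$1
    printf '[filename0]\nfilename=Nvme0n1\n'
    for ((file = 1; file <= files; file++)); do
      printf '[filename%d]\nfilename=Nvme%dn1\n' "$file" "$file"
    done
  }
  gen_conf 1   # two stanzas -> the filename0/filename1 jobs fio lists below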
00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:03.327 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:03.327 "params": { 00:34:03.327 "name": "Nvme0", 00:34:03.327 "trtype": "tcp", 00:34:03.327 "traddr": "10.0.0.2", 00:34:03.327 "adrfam": "ipv4", 00:34:03.327 "trsvcid": "4420", 00:34:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:03.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:03.327 "hdgst": false, 00:34:03.327 "ddgst": false 00:34:03.327 }, 00:34:03.327 "method": "bdev_nvme_attach_controller" 00:34:03.327 },{ 00:34:03.327 "params": { 00:34:03.327 "name": "Nvme1", 00:34:03.327 "trtype": "tcp", 00:34:03.327 "traddr": "10.0.0.2", 00:34:03.327 "adrfam": "ipv4", 00:34:03.327 "trsvcid": "4420", 00:34:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:03.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:03.327 "hdgst": false, 00:34:03.328 "ddgst": false 00:34:03.328 }, 00:34:03.328 "method": "bdev_nvme_attach_controller" 00:34:03.328 }' 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:03.328 12:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.328 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:03.328 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:03.328 fio-3.35 00:34:03.328 Starting 2 threads 00:34:03.328 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.296 00:34:13.296 filename0: (groupid=0, jobs=1): err= 0: pid=1157430: Mon Jul 22 12:29:20 2024 00:34:13.296 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:34:13.296 slat (nsec): min=6983, max=44864, avg=10594.00, stdev=5073.32 00:34:13.296 clat (usec): min=698, max=46072, avg=21066.00, stdev=20205.02 00:34:13.296 lat (usec): min=706, max=46110, avg=21076.59, stdev=20204.52 00:34:13.296 clat percentiles (usec): 00:34:13.296 | 1.00th=[ 725], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 758], 00:34:13.296 | 30.00th=[ 791], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:34:13.296 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:13.296 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:13.296 | 99.99th=[45876] 
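Neither generated file touches disk: dif.sh hands the bdev JSON printed above and the fio job file to fio through process substitution, which is why the trace shows /dev/fd/62 and /dev/fd/61 as arguments. A sketch of that invocation using the binary and plugin paths from this run (gen_nvmf_target_json and gen_fio_conf are the harness generators traced above; the process-substitution form is a reconstruction, not a verbatim copy):

  LD_PRELOAD=" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)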
00:34:13.296 bw ( KiB/s): min= 672, max= 768, per=50.05%, avg=759.58, stdev=25.78, samples=19 00:34:13.296 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:34:13.296 lat (usec) : 750=12.87%, 1000=36.50% 00:34:13.296 lat (msec) : 2=0.42%, 50=50.21% 00:34:13.296 cpu : usr=97.57%, sys=2.13%, ctx=25, majf=0, minf=113 00:34:13.296 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.296 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.296 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:13.296 filename1: (groupid=0, jobs=1): err= 0: pid=1157431: Mon Jul 22 12:29:20 2024 00:34:13.296 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:34:13.296 slat (nsec): min=4673, max=94077, avg=11145.95, stdev=5088.04 00:34:13.296 clat (usec): min=801, max=46762, avg=21064.47, stdev=20117.17 00:34:13.296 lat (usec): min=810, max=46793, avg=21075.62, stdev=20116.22 00:34:13.296 clat percentiles (usec): 00:34:13.296 | 1.00th=[ 816], 5.00th=[ 832], 10.00th=[ 840], 20.00th=[ 857], 00:34:13.296 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:34:13.296 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:13.296 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:34:13.296 | 99.99th=[46924] 00:34:13.296 bw ( KiB/s): min= 672, max= 768, per=50.05%, avg=759.58, stdev=25.78, samples=19 00:34:13.296 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:34:13.296 lat (usec) : 1000=49.79% 00:34:13.296 lat (msec) : 50=50.21% 00:34:13.296 cpu : usr=97.08%, sys=2.63%, ctx=15, majf=0, minf=172 00:34:13.296 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.296 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.296 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:13.296 00:34:13.296 Run status group 0 (all jobs): 00:34:13.296 READ: bw=1516KiB/s (1553kB/s), 758KiB/s-758KiB/s (776kB/s-777kB/s), io=14.8MiB (15.5MB), run=10001-10002msec 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.296 00:34:13.296 real 0m11.448s 00:34:13.296 user 0m20.950s 00:34:13.296 sys 0m0.777s 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:13.296 12:29:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.296 ************************************ 00:34:13.296 END TEST fio_dif_1_multi_subsystems 00:34:13.296 ************************************ 00:34:13.296 12:29:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:13.296 12:29:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:13.296 12:29:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:13.296 12:29:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.296 12:29:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:13.296 ************************************ 00:34:13.296 START TEST fio_dif_rand_params 00:34:13.296 ************************************ 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:13.297 12:29:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.297 bdev_null0 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.297 [2024-07-22 12:29:21.191253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:13.297 { 00:34:13.297 "params": { 00:34:13.297 "name": "Nvme$subsystem", 00:34:13.297 "trtype": "$TEST_TRANSPORT", 00:34:13.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:13.297 "adrfam": "ipv4", 00:34:13.297 "trsvcid": "$NVMF_PORT", 00:34:13.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.297 "hdgst": ${hdgst:-false}, 00:34:13.297 "ddgst": ${ddgst:-false} 00:34:13.297 }, 00:34:13.297 "method": "bdev_nvme_attach_controller" 00:34:13.297 } 00:34:13.297 EOF 00:34:13.297 )") 
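The parameters set at the top of this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) shape both the backing null bdev (DIF type 3) and the fio workload. A reconstructed job stanza consistent with the banner fio prints a few lines below; the exact generated file is not shown in the log, and time_based is an assumption implied by the bounded 5-second run:

  [filename0]
  ; bdev name assumed from the Nvme0 controller attached above
  filename=Nvme0n1
  rw=randread
  bs=128k
  numjobs=3
  iodepth=3
  runtime=5
  time_based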
00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
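The jq . trace above is the tail of gen_nvmf_target_json: each heredoc expansion becomes one params block, and IFS=, joins the blocks before they are spliced into a bdev-subsystem config that jq validates and pretty-prints (the printf output below shows the joined result). A condensed, runnable reconstruction; the wrapper keys follow SPDK's JSON config layout and are assumed rather than copied from the harness:

  gen_target_json() {
    local subsystem config=()
    for subsystem in "$@"; do
      # one attach-controller block per subsystem; hdgst/ddgst fixed to the
      # defaults seen in this run
      config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2",
  "adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s",
  "hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},
  "method":"bdev_nvme_attach_controller"}' "$subsystem" "$subsystem" "$subsystem")")
    done
    local IFS=,
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
  }
  gen_target_json 0   # one controller, as in this rand_params pass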
00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:13.297 "params": { 00:34:13.297 "name": "Nvme0", 00:34:13.297 "trtype": "tcp", 00:34:13.297 "traddr": "10.0.0.2", 00:34:13.297 "adrfam": "ipv4", 00:34:13.297 "trsvcid": "4420", 00:34:13.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:13.297 "hdgst": false, 00:34:13.297 "ddgst": false 00:34:13.297 }, 00:34:13.297 "method": "bdev_nvme_attach_controller" 00:34:13.297 }' 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:13.297 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:13.554 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:13.554 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:13.554 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:13.554 12:29:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.554 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:13.554 ... 
00:34:13.554 fio-3.35 00:34:13.554 Starting 3 threads 00:34:13.554 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.109 00:34:20.109 filename0: (groupid=0, jobs=1): err= 0: pid=1158822: Mon Jul 22 12:29:27 2024 00:34:20.109 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(129MiB/5012msec) 00:34:20.109 slat (nsec): min=5087, max=56582, avg=15910.46, stdev=6386.07 00:34:20.109 clat (usec): min=5378, max=92428, avg=14530.52, stdev=11986.43 00:34:20.109 lat (usec): min=5391, max=92466, avg=14546.43, stdev=11986.61 00:34:20.109 clat percentiles (usec): 00:34:20.109 | 1.00th=[ 5997], 5.00th=[ 7046], 10.00th=[ 8029], 20.00th=[ 8979], 00:34:20.109 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11207], 60.00th=[12125], 00:34:20.109 | 70.00th=[13042], 80.00th=[14222], 90.00th=[16581], 95.00th=[50070], 00:34:20.109 | 99.00th=[54789], 99.50th=[60556], 99.90th=[90702], 99.95th=[92799], 00:34:20.109 | 99.99th=[92799] 00:34:20.109 bw ( KiB/s): min=20224, max=33792, per=33.56%, avg=26368.00, stdev=4434.05, samples=10 00:34:20.109 iops : min= 158, max= 264, avg=206.00, stdev=34.64, samples=10 00:34:20.109 lat (msec) : 10=35.62%, 20=55.95%, 50=3.48%, 100=4.94% 00:34:20.109 cpu : usr=90.40%, sys=8.60%, ctx=118, majf=0, minf=94 00:34:20.109 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.109 issued rwts: total=1033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.109 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:20.109 filename0: (groupid=0, jobs=1): err= 0: pid=1158823: Mon Jul 22 12:29:27 2024 00:34:20.109 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(139MiB/5020msec) 00:34:20.109 slat (nsec): min=4698, max=43769, avg=14382.73, stdev=4482.59 00:34:20.109 clat (usec): min=5044, max=56765, avg=13499.24, stdev=9963.41 00:34:20.109 lat (usec): min=5056, max=56778, avg=13513.62, stdev=9963.39 00:34:20.109 clat percentiles (usec): 00:34:20.109 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 8586], 00:34:20.109 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11469], 60.00th=[12649], 00:34:20.109 | 70.00th=[13435], 80.00th=[14746], 90.00th=[16581], 95.00th=[48497], 00:34:20.109 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[56886], 00:34:20.109 | 99.99th=[56886] 00:34:20.109 bw ( KiB/s): min=21760, max=40704, per=36.21%, avg=28448.10, stdev=5912.70, samples=10 00:34:20.109 iops : min= 170, max= 318, avg=222.20, stdev=46.15, samples=10 00:34:20.109 lat (msec) : 10=38.96%, 20=55.12%, 50=2.51%, 100=3.41% 00:34:20.109 cpu : usr=90.81%, sys=8.73%, ctx=13, majf=0, minf=92 00:34:20.109 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.109 issued rwts: total=1114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.109 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:20.109 filename0: (groupid=0, jobs=1): err= 0: pid=1158824: Mon Jul 22 12:29:27 2024 00:34:20.109 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(118MiB/5031msec) 00:34:20.109 slat (nsec): min=4698, max=47866, avg=14333.37, stdev=4689.28 00:34:20.109 clat (usec): min=5782, max=93950, avg=16018.27, stdev=13935.97 00:34:20.109 lat (usec): min=5795, max=93964, avg=16032.60, stdev=13935.84 00:34:20.109 clat percentiles (usec): 
00:34:20.109 | 1.00th=[ 6128], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9503], 00:34:20.109 | 30.00th=[10290], 40.00th=[11076], 50.00th=[12125], 60.00th=[12780], 00:34:20.109 | 70.00th=[13435], 80.00th=[14484], 90.00th=[47973], 95.00th=[52691], 00:34:20.109 | 99.00th=[55837], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:34:20.109 | 99.99th=[93848] 00:34:20.109 bw ( KiB/s): min=13056, max=29696, per=30.56%, avg=24012.80, stdev=5242.00, samples=10 00:34:20.109 iops : min= 102, max= 232, avg=187.60, stdev=40.95, samples=10 00:34:20.109 lat (msec) : 10=26.78%, 20=62.91%, 50=2.34%, 100=7.97% 00:34:20.109 cpu : usr=90.80%, sys=8.77%, ctx=15, majf=0, minf=126 00:34:20.109 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.109 issued rwts: total=941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.109 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:20.109 00:34:20.109 Run status group 0 (all jobs): 00:34:20.109 READ: bw=76.7MiB/s (80.5MB/s), 23.4MiB/s-27.7MiB/s (24.5MB/s-29.1MB/s), io=386MiB (405MB), run=5012-5031msec 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
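Before the second pass starts below (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2, three subsystems), note the recurring ldd/grep/awk probe in these traces: the harness checks whether the fio plugin links a sanitizer runtime and, if so, preloads it ahead of the plugin. A condensed sketch of that detection, mirroring the sanitizers array and LD_PRELOAD assembly seen throughout this log:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  sanitizers=(libasan libclang_rt.asan)
  LD_PRELOAD=
  for sanitizer in "${sanitizers[@]}"; do
    # column 3 of ldd output is the resolved library path; empty when not linked
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
  done
  LD_PRELOAD="$LD_PRELOAD $plugin"   # here: ' <plugin>', since no sanitizer is linked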
00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.109 bdev_null0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:20.109 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 [2024-07-22 12:29:27.447320] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 bdev_null1 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 bdev_null2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:20.110 { 00:34:20.110 "params": { 00:34:20.110 "name": "Nvme$subsystem", 00:34:20.110 "trtype": "$TEST_TRANSPORT", 00:34:20.110 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.110 "adrfam": "ipv4", 00:34:20.110 "trsvcid": "$NVMF_PORT", 00:34:20.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.110 "hdgst": ${hdgst:-false}, 00:34:20.110 "ddgst": ${ddgst:-false} 00:34:20.110 }, 00:34:20.110 "method": "bdev_nvme_attach_controller" 00:34:20.110 } 00:34:20.110 EOF 00:34:20.110 )") 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:20.110 { 00:34:20.110 "params": { 00:34:20.110 "name": "Nvme$subsystem", 00:34:20.110 "trtype": "$TEST_TRANSPORT", 00:34:20.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.110 "adrfam": "ipv4", 00:34:20.110 "trsvcid": "$NVMF_PORT", 00:34:20.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.110 "hdgst": ${hdgst:-false}, 00:34:20.110 "ddgst": ${ddgst:-false} 00:34:20.110 }, 00:34:20.110 "method": "bdev_nvme_attach_controller" 00:34:20.110 } 00:34:20.110 EOF 00:34:20.110 )") 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:20.110 { 00:34:20.110 "params": { 00:34:20.110 "name": "Nvme$subsystem", 00:34:20.110 "trtype": "$TEST_TRANSPORT", 00:34:20.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.110 "adrfam": "ipv4", 00:34:20.110 "trsvcid": "$NVMF_PORT", 00:34:20.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.110 "hdgst": ${hdgst:-false}, 00:34:20.110 "ddgst": ${ddgst:-false} 00:34:20.110 }, 00:34:20.110 "method": "bdev_nvme_attach_controller" 00:34:20.110 } 00:34:20.110 EOF 00:34:20.110 )") 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:20.110 "params": { 00:34:20.110 "name": "Nvme0", 00:34:20.110 "trtype": "tcp", 00:34:20.110 "traddr": "10.0.0.2", 00:34:20.110 "adrfam": "ipv4", 00:34:20.110 "trsvcid": "4420", 00:34:20.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:20.110 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:20.110 "hdgst": false, 00:34:20.110 "ddgst": false 00:34:20.110 }, 00:34:20.110 "method": "bdev_nvme_attach_controller" 00:34:20.110 },{ 00:34:20.110 "params": { 00:34:20.110 "name": "Nvme1", 00:34:20.110 "trtype": "tcp", 00:34:20.110 "traddr": "10.0.0.2", 00:34:20.110 "adrfam": "ipv4", 00:34:20.110 "trsvcid": "4420", 00:34:20.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:20.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:20.110 "hdgst": false, 00:34:20.110 "ddgst": false 00:34:20.110 }, 00:34:20.110 "method": "bdev_nvme_attach_controller" 00:34:20.110 },{ 00:34:20.110 "params": { 00:34:20.110 "name": "Nvme2", 00:34:20.110 "trtype": "tcp", 00:34:20.110 "traddr": "10.0.0.2", 00:34:20.110 "adrfam": "ipv4", 00:34:20.110 "trsvcid": "4420", 00:34:20.110 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:20.110 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:20.110 "hdgst": false, 00:34:20.110 "ddgst": false 00:34:20.110 }, 00:34:20.110 "method": "bdev_nvme_attach_controller" 00:34:20.110 }' 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:20.110 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.110 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:20.110 ... 00:34:20.110 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:20.110 ... 00:34:20.110 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:20.110 ... 00:34:20.110 fio-3.35 00:34:20.110 Starting 24 threads 00:34:20.110 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.317 00:34:32.317 filename0: (groupid=0, jobs=1): err= 0: pid=1159688: Mon Jul 22 12:29:38 2024 00:34:32.317 read: IOPS=444, BW=1777KiB/s (1820kB/s)(17.4MiB/10028msec) 00:34:32.317 slat (nsec): min=6145, max=86229, avg=20927.52, stdev=13079.69 00:34:32.317 clat (usec): min=12003, max=53955, avg=35852.70, stdev=5096.63 00:34:32.317 lat (usec): min=12027, max=54018, avg=35873.63, stdev=5098.89 00:34:32.317 clat percentiles (usec): 00:34:32.317 | 1.00th=[21890], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:34:32.318 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:34:32.318 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.318 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:32.318 | 99.99th=[53740] 00:34:32.318 bw ( KiB/s): min= 1408, max= 2116, per=4.21%, avg=1776.20, stdev=225.34, samples=20 00:34:32.318 iops : min= 352, max= 529, avg=444.05, stdev=56.33, samples=20 00:34:32.318 lat (msec) : 20=0.49%, 50=99.46%, 100=0.04% 00:34:32.318 cpu : usr=98.23%, sys=1.36%, ctx=15, majf=0, minf=81 00:34:32.318 IO depths : 1=3.5%, 2=9.5%, 4=24.3%, 8=53.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename0: (groupid=0, jobs=1): err= 0: pid=1159689: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=446, BW=1787KiB/s (1830kB/s)(17.5MiB/10015msec) 00:34:32.318 slat (usec): min=4, max=423, avg=23.32, stdev=27.61 00:34:32.318 clat (usec): min=20148, max=69406, avg=35674.15, stdev=5664.27 00:34:32.318 lat (usec): min=20163, max=69431, avg=35697.46, stdev=5662.26 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[25297], 5.00th=[28705], 10.00th=[32375], 20.00th=[32637], 00:34:32.318 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:34:32.318 | 70.00th=[36439], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.318 | 99.00th=[53216], 99.50th=[57934], 99.90th=[63701], 99.95th=[63701], 00:34:32.318 | 99.99th=[69731] 00:34:32.318 bw ( KiB/s): min= 1504, max= 2032, per=4.21%, avg=1779.37, stdev=186.86, samples=19 00:34:32.318 iops : min= 376, max= 508, avg=444.84, stdev=46.72, samples=19 00:34:32.318 lat (msec) : 50=97.45%, 100=2.55% 
00:34:32.318 cpu : usr=94.94%, sys=2.91%, ctx=76, majf=0, minf=63 00:34:32.318 IO depths : 1=0.1%, 2=2.8%, 4=12.2%, 8=70.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=91.3%, 8=5.3%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename0: (groupid=0, jobs=1): err= 0: pid=1159690: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=440, BW=1761KiB/s (1804kB/s)(17.2MiB/10010msec) 00:34:32.318 slat (usec): min=8, max=122, avg=37.37, stdev=19.98 00:34:32.318 clat (usec): min=20425, max=72811, avg=36069.99, stdev=5100.20 00:34:32.318 lat (usec): min=20446, max=72829, avg=36107.36, stdev=5097.75 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[22938], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:32.318 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:34:32.318 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:34:32.318 | 99.00th=[44303], 99.50th=[50070], 99.90th=[62129], 99.95th=[62129], 00:34:32.318 | 99.99th=[72877] 00:34:32.318 bw ( KiB/s): min= 1408, max= 2016, per=4.14%, avg=1748.37, stdev=210.18, samples=19 00:34:32.318 iops : min= 352, max= 504, avg=437.05, stdev=52.54, samples=19 00:34:32.318 lat (msec) : 50=99.55%, 100=0.45% 00:34:32.318 cpu : usr=98.03%, sys=1.44%, ctx=76, majf=0, minf=63 00:34:32.318 IO depths : 1=3.4%, 2=6.9%, 4=14.3%, 8=63.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=92.0%, 8=4.7%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename0: (groupid=0, jobs=1): err= 0: pid=1159691: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=439, BW=1757KiB/s (1800kB/s)(17.2MiB/10015msec) 00:34:32.318 slat (nsec): min=8268, max=71836, avg=25511.77, stdev=11772.86 00:34:32.318 clat (usec): min=31600, max=52211, avg=36214.02, stdev=4680.66 00:34:32.318 lat (usec): min=31624, max=52240, avg=36239.53, stdev=4679.64 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:32.318 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:34:32.318 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.318 | 99.00th=[44303], 99.50th=[44303], 99.90th=[52167], 99.95th=[52167], 00:34:32.318 | 99.99th=[52167] 00:34:32.318 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1753.60, stdev=220.14, samples=20 00:34:32.318 iops : min= 352, max= 512, avg=438.40, stdev=55.04, samples=20 00:34:32.318 lat (msec) : 50=99.64%, 100=0.36% 00:34:32.318 cpu : usr=98.32%, sys=1.27%, ctx=31, majf=0, minf=44 00:34:32.318 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename0: (groupid=0, jobs=1): err= 0: pid=1159692: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=439, BW=1759KiB/s 
(1801kB/s)(17.2MiB/10005msec) 00:34:32.318 slat (usec): min=10, max=106, avg=41.51, stdev=16.44 00:34:32.318 clat (usec): min=27349, max=45170, avg=36011.25, stdev=4642.30 00:34:32.318 lat (usec): min=27366, max=45193, avg=36052.76, stdev=4639.08 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:32.318 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.318 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:34:32.318 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[45351], 00:34:32.318 | 99.99th=[45351] 00:34:32.318 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1751.58, stdev=213.56, samples=19 00:34:32.318 iops : min= 352, max= 512, avg=437.89, stdev=53.39, samples=19 00:34:32.318 lat (msec) : 50=100.00% 00:34:32.318 cpu : usr=95.84%, sys=2.55%, ctx=86, majf=0, minf=54 00:34:32.318 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename0: (groupid=0, jobs=1): err= 0: pid=1159693: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10008msec) 00:34:32.318 slat (usec): min=9, max=116, avg=40.31, stdev=19.10 00:34:32.318 clat (usec): min=13501, max=65248, avg=36031.71, stdev=5156.67 00:34:32.318 lat (usec): min=13524, max=65295, avg=36072.02, stdev=5150.59 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:32.318 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.318 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:34:32.318 | 99.00th=[44303], 99.50th=[44303], 99.90th=[65274], 99.95th=[65274], 00:34:32.318 | 99.99th=[65274] 00:34:32.318 bw ( KiB/s): min= 1408, max= 2048, per=4.13%, avg=1744.84, stdev=201.08, samples=19 00:34:32.318 iops : min= 352, max= 512, avg=436.21, stdev=50.27, samples=19 00:34:32.318 lat (msec) : 20=0.36%, 50=99.23%, 100=0.41% 00:34:32.318 cpu : usr=98.18%, sys=1.41%, ctx=15, majf=0, minf=61 00:34:32.318 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename0: (groupid=0, jobs=1): err= 0: pid=1159694: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=442, BW=1772KiB/s (1814kB/s)(17.4MiB/10041msec) 00:34:32.318 slat (usec): min=8, max=133, avg=31.25, stdev=10.81 00:34:32.318 clat (usec): min=9291, max=44473, avg=35835.40, stdev=5065.59 00:34:32.318 lat (usec): min=9302, max=44496, avg=35866.66, stdev=5065.44 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[25560], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:32.318 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:34:32.318 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:34:32.318 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 
00:34:32.318 | 99.99th=[44303] 00:34:32.318 bw ( KiB/s): min= 1408, max= 2048, per=4.20%, avg=1773.25, stdev=220.59, samples=20 00:34:32.318 iops : min= 352, max= 512, avg=443.20, stdev=55.21, samples=20 00:34:32.318 lat (msec) : 10=0.36%, 20=0.36%, 50=99.28% 00:34:32.318 cpu : usr=90.63%, sys=4.87%, ctx=739, majf=0, minf=67 00:34:32.318 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename0: (groupid=0, jobs=1): err= 0: pid=1159695: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10005msec) 00:34:32.318 slat (usec): min=6, max=106, avg=39.16, stdev=19.36 00:34:32.318 clat (usec): min=30858, max=47605, avg=36029.95, stdev=4695.50 00:34:32.318 lat (usec): min=30948, max=47625, avg=36069.11, stdev=4688.67 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:32.318 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.318 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.318 | 99.00th=[43779], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:34:32.318 | 99.99th=[47449] 00:34:32.318 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1751.74, stdev=226.01, samples=19 00:34:32.318 iops : min= 352, max= 512, avg=437.89, stdev=56.50, samples=19 00:34:32.318 lat (msec) : 50=100.00% 00:34:32.318 cpu : usr=98.27%, sys=1.30%, ctx=15, majf=0, minf=53 00:34:32.318 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.318 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.318 filename1: (groupid=0, jobs=1): err= 0: pid=1159696: Mon Jul 22 12:29:38 2024 00:34:32.318 read: IOPS=441, BW=1768KiB/s (1810kB/s)(17.3MiB/10029msec) 00:34:32.318 slat (usec): min=15, max=112, avg=39.64, stdev=16.06 00:34:32.318 clat (usec): min=11898, max=45003, avg=35809.57, stdev=4876.92 00:34:32.318 lat (usec): min=11935, max=45070, avg=35849.20, stdev=4878.94 00:34:32.318 clat percentiles (usec): 00:34:32.318 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:32.318 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.319 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:34:32.319 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:34:32.319 | 99.99th=[44827] 00:34:32.319 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1766.40, stdev=214.18, samples=20 00:34:32.319 iops : min= 352, max= 512, avg=441.60, stdev=53.55, samples=20 00:34:32.319 lat (msec) : 20=0.72%, 50=99.28% 00:34:32.319 cpu : usr=98.46%, sys=1.08%, ctx=18, majf=0, minf=63 00:34:32.319 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4432,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename1: (groupid=0, jobs=1): err= 0: pid=1159697: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=440, BW=1763KiB/s (1805kB/s)(17.2MiB/10022msec) 00:34:32.319 slat (usec): min=8, max=117, avg=32.63, stdev=27.29 00:34:32.319 clat (usec): min=22809, max=45143, avg=36026.61, stdev=4722.08 00:34:32.319 lat (usec): min=22846, max=45164, avg=36059.24, stdev=4707.70 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:32.319 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:34:32.319 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[43779], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:34:32.319 | 99.99th=[45351] 00:34:32.319 bw ( KiB/s): min= 1536, max= 2048, per=4.17%, avg=1760.00, stdev=194.23, samples=20 00:34:32.319 iops : min= 384, max= 512, avg=440.00, stdev=48.56, samples=20 00:34:32.319 lat (msec) : 50=100.00% 00:34:32.319 cpu : usr=98.05%, sys=1.41%, ctx=87, majf=0, minf=37 00:34:32.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename1: (groupid=0, jobs=1): err= 0: pid=1159698: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=439, BW=1759KiB/s (1802kB/s)(17.2MiB/10004msec) 00:34:32.319 slat (usec): min=7, max=107, avg=38.30, stdev=20.45 00:34:32.319 clat (usec): min=24204, max=57360, avg=36025.63, stdev=4725.65 00:34:32.319 lat (usec): min=24251, max=57377, avg=36063.92, stdev=4717.66 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:32.319 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.319 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[43779], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:34:32.319 | 99.99th=[57410] 00:34:32.319 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1751.74, stdev=226.01, samples=19 00:34:32.319 iops : min= 352, max= 512, avg=437.89, stdev=56.50, samples=19 00:34:32.319 lat (msec) : 50=99.95%, 100=0.05% 00:34:32.319 cpu : usr=95.05%, sys=2.77%, ctx=91, majf=0, minf=41 00:34:32.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename1: (groupid=0, jobs=1): err= 0: pid=1159699: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=443, BW=1774KiB/s (1816kB/s)(17.4MiB/10031msec) 00:34:32.319 slat (nsec): min=8381, max=81268, avg=31886.50, stdev=13402.99 00:34:32.319 clat (usec): min=9236, max=45158, avg=35818.52, stdev=5160.24 00:34:32.319 lat (usec): min=9258, max=45181, avg=35850.41, stdev=5161.50 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[20055], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:32.319 | 
30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:34:32.319 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[45351], 00:34:32.319 | 99.99th=[45351] 00:34:32.319 bw ( KiB/s): min= 1410, max= 2048, per=4.20%, avg=1773.20, stdev=220.59, samples=20 00:34:32.319 iops : min= 352, max= 512, avg=443.20, stdev=55.21, samples=20 00:34:32.319 lat (msec) : 10=0.36%, 20=0.67%, 50=98.97% 00:34:32.319 cpu : usr=92.75%, sys=3.79%, ctx=120, majf=0, minf=67 00:34:32.319 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename1: (groupid=0, jobs=1): err= 0: pid=1159700: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10008msec) 00:34:32.319 slat (nsec): min=9045, max=85296, avg=35704.52, stdev=11845.09 00:34:32.319 clat (usec): min=21529, max=57107, avg=36058.07, stdev=4835.44 00:34:32.319 lat (usec): min=21543, max=57148, avg=36093.78, stdev=4835.44 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:32.319 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:34:32.319 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[43779], 99.50th=[44827], 99.90th=[56886], 99.95th=[56886], 00:34:32.319 | 99.99th=[56886] 00:34:32.319 bw ( KiB/s): min= 1408, max= 2048, per=4.13%, avg=1744.84, stdev=218.44, samples=19 00:34:32.319 iops : min= 352, max= 512, avg=436.21, stdev=54.61, samples=19 00:34:32.319 lat (msec) : 50=99.64%, 100=0.36% 00:34:32.319 cpu : usr=95.43%, sys=2.57%, ctx=90, majf=0, minf=54 00:34:32.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename1: (groupid=0, jobs=1): err= 0: pid=1159701: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=439, BW=1758KiB/s (1801kB/s)(17.2MiB/10009msec) 00:34:32.319 slat (nsec): min=7761, max=88777, avg=35889.89, stdev=14541.08 00:34:32.319 clat (usec): min=13925, max=56214, avg=36115.54, stdev=4935.23 00:34:32.319 lat (usec): min=13937, max=56233, avg=36151.43, stdev=4934.03 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[30802], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:32.319 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33817], 00:34:32.319 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[44303], 99.50th=[45876], 99.90th=[56361], 99.95th=[56361], 00:34:32.319 | 99.99th=[56361] 00:34:32.319 bw ( KiB/s): min= 1424, max= 2032, per=4.13%, avg=1745.00, stdev=214.00, samples=19 00:34:32.319 iops : min= 356, max= 508, avg=436.21, stdev=53.49, samples=19 00:34:32.319 lat (msec) : 20=0.05%, 50=99.55%, 100=0.41% 00:34:32.319 cpu : usr=98.10%, sys=1.50%, ctx=13, majf=0, minf=43 00:34:32.319 IO depths : 
1=0.9%, 2=7.2%, 4=24.9%, 8=55.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename1: (groupid=0, jobs=1): err= 0: pid=1159702: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=439, BW=1757KiB/s (1800kB/s)(17.2MiB/10015msec) 00:34:32.319 slat (nsec): min=8430, max=71878, avg=24741.36, stdev=12407.69 00:34:32.319 clat (usec): min=31569, max=52218, avg=36206.90, stdev=4713.34 00:34:32.319 lat (usec): min=31592, max=52247, avg=36231.64, stdev=4709.17 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:32.319 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:34:32.319 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[44303], 99.50th=[44303], 99.90th=[52167], 99.95th=[52167], 00:34:32.319 | 99.99th=[52167] 00:34:32.319 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1753.60, stdev=220.14, samples=20 00:34:32.319 iops : min= 352, max= 512, avg=438.40, stdev=55.04, samples=20 00:34:32.319 lat (msec) : 50=99.64%, 100=0.36% 00:34:32.319 cpu : usr=96.98%, sys=1.86%, ctx=50, majf=0, minf=47 00:34:32.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename1: (groupid=0, jobs=1): err= 0: pid=1159703: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=440, BW=1763KiB/s (1805kB/s)(17.2MiB/10022msec) 00:34:32.319 slat (usec): min=8, max=122, avg=31.42, stdev=19.39 00:34:32.319 clat (usec): min=22635, max=55545, avg=36048.96, stdev=4658.77 00:34:32.319 lat (usec): min=22675, max=55577, avg=36080.38, stdev=4654.51 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:32.319 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:34:32.319 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[43779], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:34:32.319 | 99.99th=[55313] 00:34:32.319 bw ( KiB/s): min= 1536, max= 2048, per=4.17%, avg=1760.00, stdev=194.23, samples=20 00:34:32.319 iops : min= 384, max= 512, avg=440.00, stdev=48.56, samples=20 00:34:32.319 lat (msec) : 50=99.95%, 100=0.05% 00:34:32.319 cpu : usr=96.11%, sys=2.33%, ctx=161, majf=0, minf=42 00:34:32.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.319 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.319 filename2: (groupid=0, jobs=1): err= 0: pid=1159704: Mon Jul 22 12:29:38 2024 00:34:32.319 read: IOPS=442, BW=1770KiB/s (1813kB/s)(17.3MiB/10009msec) 00:34:32.319 slat (nsec): min=8075, max=86343, avg=25141.64, 
stdev=16533.69 00:34:32.319 clat (usec): min=14827, max=55523, avg=35977.40, stdev=5204.03 00:34:32.319 lat (usec): min=14840, max=55531, avg=36002.54, stdev=5205.31 00:34:32.319 clat percentiles (usec): 00:34:32.319 | 1.00th=[22938], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:32.319 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:34:32.319 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.319 | 99.00th=[46924], 99.50th=[48497], 99.90th=[52691], 99.95th=[55313], 00:34:32.319 | 99.99th=[55313] 00:34:32.320 bw ( KiB/s): min= 1424, max= 2064, per=4.16%, avg=1758.32, stdev=208.61, samples=19 00:34:32.320 iops : min= 356, max= 516, avg=439.58, stdev=52.15, samples=19 00:34:32.320 lat (msec) : 20=0.18%, 50=99.46%, 100=0.36% 00:34:32.320 cpu : usr=98.11%, sys=1.48%, ctx=13, majf=0, minf=39 00:34:32.320 IO depths : 1=0.6%, 2=3.7%, 4=12.5%, 8=68.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=91.7%, 8=5.3%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.320 filename2: (groupid=0, jobs=1): err= 0: pid=1159705: Mon Jul 22 12:29:38 2024 00:34:32.320 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10006msec) 00:34:32.320 slat (nsec): min=8294, max=74268, avg=27205.72, stdev=11615.86 00:34:32.320 clat (usec): min=30814, max=48602, avg=36146.51, stdev=4628.89 00:34:32.320 lat (usec): min=30855, max=48638, avg=36173.71, stdev=4631.06 00:34:32.320 clat percentiles (usec): 00:34:32.320 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:32.320 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:34:32.320 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.320 | 99.00th=[43779], 99.50th=[44827], 99.90th=[48497], 99.95th=[48497], 00:34:32.320 | 99.99th=[48497] 00:34:32.320 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1751.58, stdev=225.98, samples=19 00:34:32.320 iops : min= 352, max= 512, avg=437.89, stdev=56.50, samples=19 00:34:32.320 lat (msec) : 50=100.00% 00:34:32.320 cpu : usr=97.96%, sys=1.63%, ctx=24, majf=0, minf=37 00:34:32.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.320 filename2: (groupid=0, jobs=1): err= 0: pid=1159706: Mon Jul 22 12:29:38 2024 00:34:32.320 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10005msec) 00:34:32.320 slat (usec): min=14, max=117, avg=38.74, stdev=16.23 00:34:32.320 clat (usec): min=31300, max=45010, avg=35988.65, stdev=4590.56 00:34:32.320 lat (usec): min=31318, max=45043, avg=36027.39, stdev=4592.54 00:34:32.320 clat percentiles (usec): 00:34:32.320 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:32.320 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.320 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:34:32.320 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:34:32.320 | 99.99th=[44827] 00:34:32.320 bw ( KiB/s): min= 1408, max= 
2048, per=4.15%, avg=1751.58, stdev=213.56, samples=19 00:34:32.320 iops : min= 352, max= 512, avg=437.89, stdev=53.39, samples=19 00:34:32.320 lat (msec) : 50=100.00% 00:34:32.320 cpu : usr=98.22%, sys=1.34%, ctx=15, majf=0, minf=49 00:34:32.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.320 filename2: (groupid=0, jobs=1): err= 0: pid=1159707: Mon Jul 22 12:29:38 2024 00:34:32.320 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10005msec) 00:34:32.320 slat (usec): min=9, max=109, avg=43.14, stdev=16.96 00:34:32.320 clat (usec): min=24308, max=45055, avg=36008.89, stdev=4666.18 00:34:32.320 lat (usec): min=24359, max=45089, avg=36052.03, stdev=4660.88 00:34:32.320 clat percentiles (usec): 00:34:32.320 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:32.320 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.320 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:34:32.320 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:34:32.320 | 99.99th=[44827] 00:34:32.320 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1751.58, stdev=213.56, samples=19 00:34:32.320 iops : min= 352, max= 512, avg=437.89, stdev=53.39, samples=19 00:34:32.320 lat (msec) : 50=100.00% 00:34:32.320 cpu : usr=98.05%, sys=1.53%, ctx=21, majf=0, minf=52 00:34:32.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.320 filename2: (groupid=0, jobs=1): err= 0: pid=1159708: Mon Jul 22 12:29:38 2024 00:34:32.320 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10008msec) 00:34:32.320 slat (usec): min=8, max=104, avg=39.72, stdev=18.79 00:34:32.320 clat (usec): min=13485, max=65416, avg=36026.99, stdev=5129.67 00:34:32.320 lat (usec): min=13494, max=65459, avg=36066.71, stdev=5124.06 00:34:32.320 clat percentiles (usec): 00:34:32.320 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:32.320 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:34:32.320 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:34:32.320 | 99.00th=[44303], 99.50th=[44303], 99.90th=[65274], 99.95th=[65274], 00:34:32.320 | 99.99th=[65274] 00:34:32.320 bw ( KiB/s): min= 1408, max= 2048, per=4.13%, avg=1744.84, stdev=201.08, samples=19 00:34:32.320 iops : min= 352, max= 512, avg=436.21, stdev=50.27, samples=19 00:34:32.320 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:34:32.320 cpu : usr=95.18%, sys=2.61%, ctx=250, majf=0, minf=46 00:34:32.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:34:32.320 filename2: (groupid=0, jobs=1): err= 0: pid=1159709: Mon Jul 22 12:29:38 2024 00:34:32.320 read: IOPS=442, BW=1768KiB/s (1810kB/s)(17.3MiB/10027msec) 00:34:32.320 slat (nsec): min=8868, max=77585, avg=34727.55, stdev=11592.83 00:34:32.320 clat (usec): min=12008, max=51210, avg=35906.57, stdev=4896.21 00:34:32.320 lat (usec): min=12037, max=51245, avg=35941.30, stdev=4897.13 00:34:32.320 clat percentiles (usec): 00:34:32.320 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:32.320 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:34:32.320 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:34:32.320 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:34:32.320 | 99.99th=[51119] 00:34:32.320 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1766.40, stdev=201.75, samples=20 00:34:32.320 iops : min= 352, max= 512, avg=441.60, stdev=50.44, samples=20 00:34:32.320 lat (msec) : 20=0.68%, 50=99.28%, 100=0.05% 00:34:32.320 cpu : usr=96.74%, sys=2.23%, ctx=222, majf=0, minf=45 00:34:32.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.320 filename2: (groupid=0, jobs=1): err= 0: pid=1159710: Mon Jul 22 12:29:38 2024 00:34:32.320 read: IOPS=442, BW=1772KiB/s (1814kB/s)(17.4MiB/10041msec) 00:34:32.320 slat (usec): min=8, max=102, avg=14.45, stdev=10.86 00:34:32.320 clat (usec): min=11034, max=44555, avg=35979.41, stdev=5059.67 00:34:32.320 lat (usec): min=11043, max=44576, avg=35993.86, stdev=5056.59 00:34:32.320 clat percentiles (usec): 00:34:32.320 | 1.00th=[25822], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:34:32.320 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:34:32.320 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.320 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:34:32.320 | 99.99th=[44303] 00:34:32.320 bw ( KiB/s): min= 1408, max= 2048, per=4.20%, avg=1773.25, stdev=220.59, samples=20 00:34:32.320 iops : min= 352, max= 512, avg=443.20, stdev=55.21, samples=20 00:34:32.320 lat (msec) : 20=0.72%, 50=99.28% 00:34:32.320 cpu : usr=97.91%, sys=1.64%, ctx=21, majf=0, minf=67 00:34:32.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.320 filename2: (groupid=0, jobs=1): err= 0: pid=1159711: Mon Jul 22 12:29:38 2024 00:34:32.320 read: IOPS=439, BW=1757KiB/s (1800kB/s)(17.2MiB/10015msec) 00:34:32.320 slat (nsec): min=9036, max=71746, avg=30241.95, stdev=10819.80 00:34:32.320 clat (usec): min=31569, max=52269, avg=36152.14, stdev=4688.37 00:34:32.320 lat (usec): min=31624, max=52299, avg=36182.38, stdev=4687.24 00:34:32.320 clat percentiles (usec): 00:34:32.320 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:32.320 | 30.00th=[32637], 40.00th=[32900], 
50.00th=[32900], 60.00th=[33817], 00:34:32.320 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:34:32.320 | 99.00th=[44303], 99.50th=[44303], 99.90th=[52167], 99.95th=[52167], 00:34:32.320 | 99.99th=[52167] 00:34:32.320 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1753.60, stdev=220.14, samples=20 00:34:32.320 iops : min= 352, max= 512, avg=438.40, stdev=55.04, samples=20 00:34:32.320 lat (msec) : 50=99.64%, 100=0.36% 00:34:32.320 cpu : usr=97.07%, sys=1.87%, ctx=122, majf=0, minf=47 00:34:32.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:32.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.320 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:32.320 00:34:32.320 Run status group 0 (all jobs): 00:34:32.320 READ: bw=41.2MiB/s (43.2MB/s), 1757KiB/s-1787KiB/s (1800kB/s-1830kB/s), io=414MiB (434MB), run=10004-10041msec 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.320 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 bdev_null0 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 [2024-07-22 12:29:39.195283] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 bdev_null1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- 
# local subsystem config 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:32.321 { 00:34:32.321 "params": { 00:34:32.321 "name": "Nvme$subsystem", 00:34:32.321 "trtype": "$TEST_TRANSPORT", 00:34:32.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:32.321 "adrfam": "ipv4", 00:34:32.321 "trsvcid": "$NVMF_PORT", 00:34:32.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:32.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:32.321 "hdgst": ${hdgst:-false}, 00:34:32.321 "ddgst": ${ddgst:-false} 00:34:32.321 }, 00:34:32.321 "method": "bdev_nvme_attach_controller" 00:34:32.321 } 00:34:32.321 EOF 00:34:32.321 )") 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:32.321 { 00:34:32.321 "params": { 00:34:32.321 "name": "Nvme$subsystem", 00:34:32.321 "trtype": "$TEST_TRANSPORT", 00:34:32.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:32.321 "adrfam": "ipv4", 00:34:32.321 "trsvcid": "$NVMF_PORT", 00:34:32.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:32.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:32.321 "hdgst": ${hdgst:-false}, 00:34:32.321 "ddgst": ${ddgst:-false} 00:34:32.321 }, 00:34:32.321 "method": "bdev_nvme_attach_controller" 00:34:32.321 } 00:34:32.321 EOF 00:34:32.321 )") 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:32.321 12:29:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:32.321 "params": { 00:34:32.321 "name": "Nvme0", 00:34:32.321 "trtype": "tcp", 00:34:32.321 "traddr": "10.0.0.2", 00:34:32.321 "adrfam": "ipv4", 00:34:32.321 "trsvcid": "4420", 00:34:32.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:32.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:32.321 "hdgst": false, 00:34:32.321 "ddgst": false 00:34:32.321 }, 00:34:32.321 "method": "bdev_nvme_attach_controller" 00:34:32.321 },{ 00:34:32.321 "params": { 00:34:32.321 "name": "Nvme1", 00:34:32.322 "trtype": "tcp", 00:34:32.322 "traddr": "10.0.0.2", 00:34:32.322 "adrfam": "ipv4", 00:34:32.322 "trsvcid": "4420", 00:34:32.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:32.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:32.322 "hdgst": false, 00:34:32.322 "ddgst": false 00:34:32.322 }, 00:34:32.322 "method": "bdev_nvme_attach_controller" 00:34:32.322 }' 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:32.322 12:29:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:32.322 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:32.322 ... 00:34:32.322 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:32.322 ... 
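The trace above is the entire initiator-side setup for this pass: the three rand_params subsystems are torn down, two fresh null bdevs (64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1) are exported as NVMe/TCP subsystems cnode0/cnode1 on 10.0.0.2:4420, and fio is launched with SPDK's spdk_bdev ioengine LD_PRELOADed so the I/O path bypasses the kernel block layer entirely — the bdev_nvme_attach_controller parameters printed by gen_nvmf_target_json are streamed to fio as JSON on /dev/fd/62, with the generated job file on /dev/fd/61. Below is a minimal sketch of the same flow, assuming scripts/rpc.py is what the harness's rpc_cmd wraps; the temp-file path, the subsystems/bdev wrapper around the attach entries, and the job file name are illustrative stand-ins for the fd-based plumbing traced above:

# Target side: null bdev with DIF-1 metadata behind an NVMe/TCP subsystem
# (same RPCs as the rpc_cmd calls in the trace above).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Initiator side: hand fio the attach parameters as JSON (a regular file here
# instead of /dev/fd/62; the "subsystems"/"bdev" wrapper shape is assumed from
# SPDK's usual JSON subsystem config, only the inner entry is verbatim above).
cat > /tmp/spdk_fio.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_nvme_attach_controller",
   "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2",
             "adrfam":"ipv4","trsvcid":"4420",
             "subnqn":"nqn.2016-06.io.spdk:cnode0",
             "hostnqn":"nqn.2016-06.io.spdk:host0",
             "hdgst":false,"ddgst":false}}]}]}
EOF
LD_PRELOAD=./build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/spdk_fio.json job.fio

Each attached controller NvmeN exposes its namespace to fio as bdev NvmeNn1, which is what the generated filename0/filename1 job sections target (per the usual dif.sh convention), matching the traced parameters: randread, bs=8k/16k/128k, numjobs=2, iodepth=8, runtime=5.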
00:34:32.322 fio-3.35 00:34:32.322 Starting 4 threads 00:34:32.322 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.614 00:34:37.614 filename0: (groupid=0, jobs=1): err= 0: pid=1160970: Mon Jul 22 12:29:45 2024 00:34:37.614 read: IOPS=1733, BW=13.5MiB/s (14.2MB/s)(67.7MiB/5001msec) 00:34:37.614 slat (nsec): min=6463, max=62440, avg=11440.98, stdev=4701.96 00:34:37.614 clat (usec): min=878, max=8578, avg=4578.19, stdev=813.31 00:34:37.614 lat (usec): min=890, max=8586, avg=4589.63, stdev=813.01 00:34:37.614 clat percentiles (usec): 00:34:37.614 | 1.00th=[ 2802], 5.00th=[ 3458], 10.00th=[ 3785], 20.00th=[ 4080], 00:34:37.614 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:34:37.614 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5604], 95.00th=[ 6325], 00:34:37.614 | 99.00th=[ 7242], 99.50th=[ 7635], 99.90th=[ 7963], 99.95th=[ 8225], 00:34:37.614 | 99.99th=[ 8586] 00:34:37.614 bw ( KiB/s): min=13120, max=15088, per=24.62%, avg=13903.44, stdev=550.05, samples=9 00:34:37.615 iops : min= 1640, max= 1886, avg=1737.89, stdev=68.78, samples=9 00:34:37.615 lat (usec) : 1000=0.01% 00:34:37.615 lat (msec) : 2=0.16%, 4=16.44%, 10=83.39% 00:34:37.615 cpu : usr=92.48%, sys=7.00%, ctx=9, majf=0, minf=9 00:34:37.615 IO depths : 1=0.1%, 2=6.2%, 4=64.8%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 issued rwts: total=8670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.615 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.615 filename0: (groupid=0, jobs=1): err= 0: pid=1160971: Mon Jul 22 12:29:45 2024 00:34:37.615 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.8MiB/5002msec) 00:34:37.615 slat (nsec): min=6692, max=60573, avg=11782.90, stdev=4375.09 00:34:37.615 clat (usec): min=1340, max=8286, avg=4311.77, stdev=715.64 00:34:37.615 lat (usec): min=1353, max=8306, avg=4323.55, stdev=715.75 00:34:37.615 clat percentiles (usec): 00:34:37.615 | 1.00th=[ 2737], 5.00th=[ 3228], 10.00th=[ 3458], 20.00th=[ 3752], 00:34:37.615 | 30.00th=[ 4015], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:34:37.615 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5080], 95.00th=[ 5538], 00:34:37.615 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 8225], 00:34:37.615 | 99.99th=[ 8291] 00:34:37.615 bw ( KiB/s): min=14160, max=15536, per=26.06%, avg=14714.80, stdev=475.94, samples=10 00:34:37.615 iops : min= 1770, max= 1942, avg=1839.30, stdev=59.52, samples=10 00:34:37.615 lat (msec) : 2=0.15%, 4=28.72%, 10=71.13% 00:34:37.615 cpu : usr=92.02%, sys=7.48%, ctx=18, majf=0, minf=0 00:34:37.615 IO depths : 1=0.1%, 2=9.9%, 4=61.4%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 issued rwts: total=9196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.615 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.615 filename1: (groupid=0, jobs=1): err= 0: pid=1160972: Mon Jul 22 12:29:45 2024 00:34:37.615 read: IOPS=1698, BW=13.3MiB/s (13.9MB/s)(66.4MiB/5003msec) 00:34:37.615 slat (nsec): min=6380, max=48503, avg=11343.84, stdev=4318.84 00:34:37.615 clat (usec): min=1431, max=10116, avg=4673.33, stdev=901.20 00:34:37.615 lat (usec): min=1439, max=10124, avg=4684.67, stdev=901.08 00:34:37.615 clat percentiles (usec): 
00:34:37.615 | 1.00th=[ 3097], 5.00th=[ 3589], 10.00th=[ 3884], 20.00th=[ 4178], 00:34:37.615 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:34:37.615 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5932], 95.00th=[ 6783], 00:34:37.615 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[ 9634], 99.95th=[ 9765], 00:34:37.615 | 99.99th=[10159] 00:34:37.615 bw ( KiB/s): min=12656, max=14272, per=24.06%, avg=13587.20, stdev=523.85, samples=10 00:34:37.615 iops : min= 1582, max= 1784, avg=1698.40, stdev=65.48, samples=10 00:34:37.615 lat (msec) : 2=0.02%, 4=13.25%, 10=86.70%, 20=0.02% 00:34:37.615 cpu : usr=92.32%, sys=7.18%, ctx=15, majf=0, minf=2 00:34:37.615 IO depths : 1=0.8%, 2=3.8%, 4=69.1%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 issued rwts: total=8497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.615 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.615 filename1: (groupid=0, jobs=1): err= 0: pid=1160973: Mon Jul 22 12:29:45 2024 00:34:37.615 read: IOPS=1790, BW=14.0MiB/s (14.7MB/s)(69.9MiB/5001msec) 00:34:37.615 slat (nsec): min=5037, max=58946, avg=11855.68, stdev=4432.24 00:34:37.615 clat (usec): min=1133, max=8536, avg=4428.86, stdev=749.66 00:34:37.615 lat (usec): min=1147, max=8551, avg=4440.71, stdev=749.62 00:34:37.615 clat percentiles (usec): 00:34:37.615 | 1.00th=[ 2638], 5.00th=[ 3326], 10.00th=[ 3621], 20.00th=[ 3949], 00:34:37.615 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:34:37.615 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5342], 95.00th=[ 5866], 00:34:37.615 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7439], 99.95th=[ 7701], 00:34:37.615 | 99.99th=[ 8586] 00:34:37.615 bw ( KiB/s): min=13552, max=15072, per=25.38%, avg=14330.67, stdev=431.63, samples=9 00:34:37.615 iops : min= 1694, max= 1884, avg=1791.33, stdev=53.95, samples=9 00:34:37.615 lat (msec) : 2=0.17%, 4=22.43%, 10=77.40% 00:34:37.615 cpu : usr=92.20%, sys=7.24%, ctx=10, majf=0, minf=9 00:34:37.615 IO depths : 1=0.1%, 2=9.9%, 4=61.8%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.615 issued rwts: total=8953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.615 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.615 00:34:37.615 Run status group 0 (all jobs): 00:34:37.615 READ: bw=55.1MiB/s (57.8MB/s), 13.3MiB/s-14.4MiB/s (13.9MB/s-15.1MB/s), io=276MiB (289MB), run=5001-5003msec 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.872 12:29:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.872 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.873 00:34:37.873 real 0m24.465s 00:34:37.873 user 4m29.648s 00:34:37.873 sys 0m8.655s 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 ************************************ 00:34:37.873 END TEST fio_dif_rand_params 00:34:37.873 ************************************ 00:34:37.873 12:29:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:37.873 12:29:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:37.873 12:29:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:37.873 12:29:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 ************************************ 00:34:37.873 START TEST fio_dif_digest 00:34:37.873 ************************************ 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 bdev_null0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.873 [2024-07-22 12:29:45.704019] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:37.873 { 00:34:37.873 "params": { 00:34:37.873 "name": "Nvme$subsystem", 00:34:37.873 "trtype": "$TEST_TRANSPORT", 00:34:37.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.873 "adrfam": "ipv4", 00:34:37.873 "trsvcid": "$NVMF_PORT", 00:34:37.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.873 "hdgst": ${hdgst:-false}, 00:34:37.873 "ddgst": ${ddgst:-false} 00:34:37.873 }, 00:34:37.873 "method": "bdev_nvme_attach_controller" 
00:34:37.873 } 00:34:37.873 EOF 00:34:37.873 )") 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:37.873 "params": { 00:34:37.873 "name": "Nvme0", 00:34:37.873 "trtype": "tcp", 00:34:37.873 "traddr": "10.0.0.2", 00:34:37.873 "adrfam": "ipv4", 00:34:37.873 "trsvcid": "4420", 00:34:37.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:37.873 "hdgst": true, 00:34:37.873 "ddgst": true 00:34:37.873 }, 00:34:37.873 "method": "bdev_nvme_attach_controller" 00:34:37.873 }' 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:37.873 12:29:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.130 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:38.130 ... 
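fio itself is launched through SPDK's external bdev ioengine. The @1345 records above are a sanitizer probe: the helper ldd-inspects the plugin and, if it links libasan or libclang_rt.asan, that runtime must appear in LD_PRELOAD ahead of the plugin itself (here both probes come back empty). A rough sketch of the resulting launch, with paths as used in this job:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty: non-ASan build
# Preload the (optional) sanitizer runtime, then the fio plugin; the generated
# JSON config and the fio job file arrive on inherited fds 62 and 61:
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61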
00:34:38.130 fio-3.35 00:34:38.130 Starting 3 threads 00:34:38.130 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.321 00:34:50.321 filename0: (groupid=0, jobs=1): err= 0: pid=1161841: Mon Jul 22 12:29:56 2024 00:34:50.321 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(251MiB/10046msec) 00:34:50.321 slat (nsec): min=4621, max=39267, avg=13988.27, stdev=3495.93 00:34:50.321 clat (usec): min=9415, max=55768, avg=14947.54, stdev=1707.40 00:34:50.321 lat (usec): min=9428, max=55781, avg=14961.53, stdev=1707.32 00:34:50.321 clat percentiles (usec): 00:34:50.321 | 1.00th=[11731], 5.00th=[13042], 10.00th=[13435], 20.00th=[13960], 00:34:50.321 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:34:50.321 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:34:50.321 | 99.00th=[17695], 99.50th=[18220], 99.90th=[22938], 99.95th=[47973], 00:34:50.321 | 99.99th=[55837] 00:34:50.321 bw ( KiB/s): min=24832, max=27136, per=33.53%, avg=25702.40, stdev=629.27, samples=20 00:34:50.321 iops : min= 194, max= 212, avg=200.80, stdev= 4.92, samples=20 00:34:50.321 lat (msec) : 10=0.15%, 20=99.60%, 50=0.20%, 100=0.05% 00:34:50.321 cpu : usr=90.41%, sys=9.09%, ctx=37, majf=0, minf=126 00:34:50.321 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.321 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:50.321 filename0: (groupid=0, jobs=1): err= 0: pid=1161842: Mon Jul 22 12:29:56 2024 00:34:50.321 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(243MiB/10047msec) 00:34:50.321 slat (nsec): min=4694, max=43811, avg=13826.38, stdev=2987.86 00:34:50.321 clat (usec): min=11682, max=59077, avg=15496.34, stdev=2843.45 00:34:50.321 lat (usec): min=11695, max=59113, avg=15510.17, stdev=2843.60 00:34:50.321 clat percentiles (usec): 00:34:50.321 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:34:50.321 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:34:50.321 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:34:50.321 | 99.00th=[18482], 99.50th=[22676], 99.90th=[57934], 99.95th=[58983], 00:34:50.321 | 99.99th=[58983] 00:34:50.321 bw ( KiB/s): min=22016, max=25856, per=32.35%, avg=24796.05, stdev=1067.31, samples=20 00:34:50.321 iops : min= 172, max= 202, avg=193.70, stdev= 8.34, samples=20 00:34:50.321 lat (msec) : 20=99.43%, 50=0.21%, 100=0.36% 00:34:50.321 cpu : usr=90.06%, sys=9.46%, ctx=29, majf=0, minf=134 00:34:50.321 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.321 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:50.321 filename0: (groupid=0, jobs=1): err= 0: pid=1161843: Mon Jul 22 12:29:56 2024 00:34:50.321 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10047msec) 00:34:50.321 slat (usec): min=4, max=103, avg=13.67, stdev= 3.88 00:34:50.321 clat (usec): min=8996, max=51412, avg=14556.76, stdev=1662.67 00:34:50.321 lat (usec): min=9009, max=51425, avg=14570.42, stdev=1662.75 00:34:50.321 clat percentiles (usec): 00:34:50.321 | 
1.00th=[11469], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:34:50.321 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:34:50.321 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:34:50.321 | 99.00th=[17433], 99.50th=[17695], 99.90th=[21890], 99.95th=[47973], 00:34:50.321 | 99.99th=[51643] 00:34:50.321 bw ( KiB/s): min=25344, max=27648, per=34.45%, avg=26406.40, stdev=848.14, samples=20 00:34:50.321 iops : min= 198, max= 216, avg=206.30, stdev= 6.63, samples=20 00:34:50.321 lat (msec) : 10=0.19%, 20=99.56%, 50=0.19%, 100=0.05% 00:34:50.321 cpu : usr=89.46%, sys=10.05%, ctx=28, majf=0, minf=152 00:34:50.321 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.321 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:50.321 00:34:50.321 Run status group 0 (all jobs): 00:34:50.321 READ: bw=74.8MiB/s (78.5MB/s), 24.1MiB/s-25.7MiB/s (25.3MB/s-26.9MB/s), io=752MiB (789MB), run=10046-10047msec 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.321 00:34:50.321 real 0m11.149s 00:34:50.321 user 0m28.290s 00:34:50.321 sys 0m3.140s 00:34:50.321 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:50.322 12:29:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:50.322 ************************************ 00:34:50.322 END TEST fio_dif_digest 00:34:50.322 ************************************ 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:50.322 12:29:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:50.322 12:29:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:34:50.322 rmmod nvme_tcp 00:34:50.322 rmmod nvme_fabrics 00:34:50.322 rmmod nvme_keyring 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1155798 ']' 00:34:50.322 12:29:56 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1155798 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1155798 ']' 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1155798 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1155798 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1155798' 00:34:50.322 killing process with pid 1155798 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1155798 00:34:50.322 12:29:56 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1155798 00:34:50.322 12:29:57 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:50.322 12:29:57 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:50.322 Waiting for block devices as requested 00:34:50.580 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:50.580 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:50.838 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:50.838 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:50.838 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:50.838 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:51.097 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:51.097 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:51.097 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:51.097 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:51.356 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:51.356 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:51.356 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:51.356 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:51.615 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:51.615 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:51.615 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:51.875 12:29:59 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:51.875 12:29:59 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:51.875 12:29:59 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:51.875 12:29:59 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:51.875 12:29:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.875 12:29:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:51.875 12:29:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.779 12:30:01 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:53.779 00:34:53.779 real 1m6.804s 00:34:53.779 user 6m25.636s 00:34:53.779 sys 0m20.837s 00:34:53.779 12:30:01 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:34:53.779 12:30:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:53.779 ************************************ 00:34:53.779 END TEST nvmf_dif 00:34:53.779 ************************************ 00:34:53.779 12:30:01 -- common/autotest_common.sh@1142 -- # return 0 00:34:53.779 12:30:01 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:53.779 12:30:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:53.779 12:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:53.779 12:30:01 -- common/autotest_common.sh@10 -- # set +x 00:34:53.779 ************************************ 00:34:53.779 START TEST nvmf_abort_qd_sizes 00:34:53.779 ************************************ 00:34:53.779 12:30:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:54.038 * Looking for test storage... 00:34:54.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.038 12:30:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:34:54.038 12:30:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:55.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:55.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:55.934 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:55.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
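The "Found net devices under ..." lines above come from mapping each supported NIC's PCI address to the interfaces the kernel bound to it, via sysfs. Roughly, per the nvmf/common.sh loop being traced (assuming the glob matches, as it does for both ports here):

for pci in "${pci_devs[@]}"; do                        # 0000:0a:00.0 and 0000:0a:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev entries for this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths: cvl_0_0, cvl_0_1
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")                   # pool of candidate test interfaces
done

With two interfaces found, the records below pick cvl_0_0 as the target side (moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and cvl_0_1 as the initiator side (10.0.0.1), then verify both directions with ping.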
00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:55.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:34:55.934 00:34:55.934 --- 10.0.0.2 ping statistics --- 00:34:55.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.934 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:55.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:34:55.934 00:34:55.934 --- 10.0.0.1 ping statistics --- 00:34:55.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.934 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:55.934 12:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.302 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:57.302 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:57.302 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:57.302 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:57.302 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:57.302 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:57.302 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:57.302 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:57.302 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:58.233 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1166752 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1166752 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1166752 ']' 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:58.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:58.233 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:58.233 [2024-07-22 12:30:06.138947] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:34:58.233 [2024-07-22 12:30:06.139032] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:58.492 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.492 [2024-07-22 12:30:06.176799] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:58.492 [2024-07-22 12:30:06.205580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:58.492 [2024-07-22 12:30:06.297359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:58.492 [2024-07-22 12:30:06.297437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.492 [2024-07-22 12:30:06.297451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:58.492 [2024-07-22 12:30:06.297467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:58.492 [2024-07-22 12:30:06.297477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:58.492 [2024-07-22 12:30:06.297561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.492 [2024-07-22 12:30:06.297631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.492 [2024-07-22 12:30:06.297689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:58.492 [2024-07-22 12:30:06.297692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.492 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:58.492 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:34:58.492 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:58.492 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:58.492 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.749 12:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:58.749 ************************************ 00:34:58.749 START TEST spdk_target_abort 00:34:58.749 ************************************ 00:34:58.749 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:34:58.749 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:58.749 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:34:58.749 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.749 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.063 spdk_targetn1 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.063 [2024-07-22 12:30:09.312047] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.063 [2024-07-22 12:30:09.344248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:02.063 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:02.063 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.349 Initializing NVMe Controllers 00:35:05.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:05.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:05.349 Initialization complete. Launching workers. 00:35:05.349 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8310, failed: 0 00:35:05.349 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1356, failed to submit 6954 00:35:05.349 success 728, unsuccess 628, failed 0 00:35:05.349 12:30:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:05.349 12:30:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:05.349 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.664 Initializing NVMe Controllers 00:35:08.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:08.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:08.664 Initialization complete. Launching workers. 00:35:08.664 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8737, failed: 0 00:35:08.664 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7489 00:35:08.664 success 326, unsuccess 922, failed 0 00:35:08.664 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.664 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.665 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.189 Initializing NVMe Controllers 00:35:11.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:11.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:11.189 Initialization complete. Launching workers. 
00:35:11.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31251, failed: 0 00:35:11.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2688, failed to submit 28563 00:35:11.189 success 548, unsuccess 2140, failed 0 00:35:11.189 12:30:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:11.189 12:30:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.189 12:30:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:11.447 12:30:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.447 12:30:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:11.447 12:30:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.447 12:30:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1166752 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1166752 ']' 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1166752 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1166752 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1166752' 00:35:12.816 killing process with pid 1166752 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1166752 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1166752 00:35:12.816 00:35:12.816 real 0m14.236s 00:35:12.816 user 0m53.072s 00:35:12.816 sys 0m2.877s 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:12.816 12:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.816 ************************************ 00:35:12.816 END TEST spdk_target_abort 00:35:12.816 ************************************ 00:35:12.816 12:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:12.816 12:30:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:12.816 12:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:12.816 12:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:12.816 12:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:13.074 
************************************ 00:35:13.074 START TEST kernel_target_abort 00:35:13.074 ************************************ 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:13.074 12:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:14.003 Waiting for block devices as requested 00:35:14.003 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:14.261 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:14.261 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:14.261 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:14.517 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:14.517 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:14.517 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:14.518 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:14.775 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:14.775 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:14.775 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:14.775 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:15.033 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:15.033 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:15.033 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:15.033 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:15.033 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:15.291 No valid GPT data, bailing 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:15.291 12:30:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:15.291 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:15.548 00:35:15.548 Discovery Log Number of Records 2, Generation counter 2 00:35:15.548 =====Discovery Log Entry 0====== 00:35:15.548 trtype: tcp 00:35:15.548 adrfam: ipv4 00:35:15.548 subtype: current discovery subsystem 00:35:15.548 treq: not specified, sq flow control disable supported 00:35:15.548 portid: 1 00:35:15.548 trsvcid: 4420 00:35:15.548 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:15.548 traddr: 10.0.0.1 00:35:15.548 eflags: none 00:35:15.548 sectype: none 00:35:15.548 =====Discovery Log Entry 1====== 00:35:15.548 trtype: tcp 00:35:15.548 adrfam: ipv4 00:35:15.548 subtype: nvme subsystem 00:35:15.548 treq: not specified, sq flow control disable supported 00:35:15.548 portid: 1 00:35:15.548 trsvcid: 4420 00:35:15.548 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:15.548 traddr: 10.0.0.1 00:35:15.548 eflags: none 00:35:15.548 sectype: none 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:15.548 12:30:23 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:15.548 12:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.548 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.824 Initializing NVMe Controllers 00:35:18.824 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.824 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.824 Initialization complete. Launching workers. 00:35:18.824 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33989, failed: 0 00:35:18.824 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33989, failed to submit 0 00:35:18.824 success 0, unsuccess 33989, failed 0 00:35:18.824 12:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:18.824 12:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.824 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.108 Initializing NVMe Controllers 00:35:22.108 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:22.108 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:22.108 Initialization complete. Launching workers. 
00:35:22.108 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65914, failed: 0 00:35:22.108 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16622, failed to submit 49292 00:35:22.108 success 0, unsuccess 16622, failed 0 00:35:22.108 12:30:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:22.108 12:30:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:22.108 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.390 Initializing NVMe Controllers 00:35:25.390 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:25.390 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:25.390 Initialization complete. Launching workers. 00:35:25.390 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64769, failed: 0 00:35:25.390 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16178, failed to submit 48591 00:35:25.390 success 0, unsuccess 16178, failed 0 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:25.390 12:30:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:25.956 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:25.956 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:25.956 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:25.956 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:25.956 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:25.956 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:25.956 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:25.956 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:25.956 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:25.956 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:25.956 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:25.956 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:25.956 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:25.956 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:35:25.956 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:25.956 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:26.888 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:27.146 00:35:27.146 real 0m14.154s 00:35:27.146 user 0m5.298s 00:35:27.146 sys 0m3.382s 00:35:27.146 12:30:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:27.146 12:30:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.146 ************************************ 00:35:27.146 END TEST kernel_target_abort 00:35:27.146 ************************************ 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:27.146 rmmod nvme_tcp 00:35:27.146 rmmod nvme_fabrics 00:35:27.146 rmmod nvme_keyring 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1166752 ']' 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1166752 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1166752 ']' 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1166752 00:35:27.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1166752) - No such process 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1166752 is not found' 00:35:27.146 Process with pid 1166752 is not found 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:27.146 12:30:34 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:28.516 Waiting for block devices as requested 00:35:28.516 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:28.516 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:28.516 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:28.773 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:28.773 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:28.773 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:28.773 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:28.773 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:29.030 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:29.030 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.030 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.294 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.294 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.294 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:35:29.294 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:29.294 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:29.551 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:29.551 12:30:37 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:29.551 12:30:37 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:29.551 12:30:37 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:29.551 12:30:37 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:29.551 12:30:37 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.551 12:30:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.551 12:30:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.079 12:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:32.079 00:35:32.079 real 0m37.751s 00:35:32.079 user 1m0.438s 00:35:32.079 sys 0m9.630s 00:35:32.079 12:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:32.079 12:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:32.079 ************************************ 00:35:32.079 END TEST nvmf_abort_qd_sizes 00:35:32.079 ************************************ 00:35:32.079 12:30:39 -- common/autotest_common.sh@1142 -- # return 0 00:35:32.079 12:30:39 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:32.079 12:30:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:32.079 12:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:32.079 12:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:32.079 ************************************ 00:35:32.079 START TEST keyring_file 00:35:32.079 ************************************ 00:35:32.079 12:30:39 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:32.079 * Looking for test storage... 
00:35:32.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:32.079 12:30:39 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:32.079 12:30:39 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.079 12:30:39 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.079 12:30:39 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.079 12:30:39 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.079 12:30:39 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.079 12:30:39 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.079 12:30:39 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.079 12:30:39 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:32.079 12:30:39 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:32.079 12:30:39 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:32.079 12:30:39 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:32.079 12:30:39 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:32.079 12:30:39 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cikAQzvX8t 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:32.080 12:30:39 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cikAQzvX8t 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cikAQzvX8t 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.cikAQzvX8t 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AEUqLyktb8 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:32.080 12:30:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AEUqLyktb8 00:35:32.080 12:30:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AEUqLyktb8 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AEUqLyktb8 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@30 -- # tgtpid=1173015 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:32.080 12:30:39 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1173015 00:35:32.080 12:30:39 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1173015 ']' 00:35:32.080 12:30:39 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.080 12:30:39 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:32.080 12:30:39 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.080 12:30:39 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:32.080 12:30:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.080 [2024-07-22 12:30:39.690661] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:35:32.080 [2024-07-22 12:30:39.690753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173015 ] 00:35:32.080 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.080 [2024-07-22 12:30:39.726366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:32.080 [2024-07-22 12:30:39.753274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.080 [2024-07-22 12:30:39.836944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.338 12:30:40 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:32.338 12:30:40 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:32.338 12:30:40 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:32.338 12:30:40 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.338 12:30:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.338 [2024-07-22 12:30:40.088137] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.338 null0 00:35:32.338 [2024-07-22 12:30:40.120205] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:32.338 [2024-07-22 12:30:40.120691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:32.338 [2024-07-22 12:30:40.128222] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:32.338 12:30:40 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.339 12:30:40 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.339 [2024-07-22 12:30:40.140242] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:32.339 request: 00:35:32.339 { 00:35:32.339 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.339 "secure_channel": false, 00:35:32.339 "listen_address": { 00:35:32.339 "trtype": "tcp", 00:35:32.339 "traddr": "127.0.0.1", 00:35:32.339 "trsvcid": "4420" 00:35:32.339 }, 00:35:32.339 "method": "nvmf_subsystem_add_listener", 00:35:32.339 "req_id": 1 00:35:32.339 } 00:35:32.339 Got JSON-RPC error response 00:35:32.339 response: 00:35:32.339 { 00:35:32.339 "code": -32602, 00:35:32.339 "message": "Invalid parameters" 00:35:32.339 } 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:32.339 12:30:40 keyring_file -- keyring/file.sh@46 -- # bperfpid=1173021 00:35:32.339 12:30:40 
keyring_file -- keyring/file.sh@48 -- # waitforlisten 1173021 /var/tmp/bperf.sock 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1173021 ']' 00:35:32.339 12:30:40 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:32.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:32.339 12:30:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:32.339 [2024-07-22 12:30:40.191074] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:35:32.339 [2024-07-22 12:30:40.191142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173021 ] 00:35:32.339 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.339 [2024-07-22 12:30:40.221563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:32.339 [2024-07-22 12:30:40.249146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.597 [2024-07-22 12:30:40.335233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.597 12:30:40 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:32.597 12:30:40 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:32.597 12:30:40 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:32.597 12:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:32.855 12:30:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AEUqLyktb8 00:35:32.855 12:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AEUqLyktb8 00:35:33.113 12:30:40 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:33.113 12:30:40 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:33.113 12:30:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.113 12:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.113 12:30:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:33.370 12:30:41 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.cikAQzvX8t == \/\t\m\p\/\t\m\p\.\c\i\k\A\Q\z\v\X\8\t ]] 00:35:33.370 12:30:41 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:33.370 12:30:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:33.370 12:30:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:35:33.370 12:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.370 12:30:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:33.629 12:30:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AEUqLyktb8 == \/\t\m\p\/\t\m\p\.\A\E\U\q\L\y\k\t\b\8 ]] 00:35:33.629 12:30:41 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:33.629 12:30:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:33.629 12:30:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:33.629 12:30:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.629 12:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.629 12:30:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:33.888 12:30:41 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:33.888 12:30:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:33.888 12:30:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:33.888 12:30:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:33.888 12:30:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.888 12:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.888 12:30:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.146 12:30:41 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:34.146 12:30:41 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:34.146 12:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:34.404 [2024-07-22 12:30:42.208570] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:34.404 nvme0n1 00:35:34.404 12:30:42 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:34.404 12:30:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:34.404 12:30:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.404 12:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.404 12:30:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.404 12:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.662 12:30:42 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:34.662 12:30:42 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:35:34.662 12:30:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:34.662 12:30:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.662 12:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.662 12:30:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:35:34.662 12:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.930 12:30:42 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:34.930 12:30:42 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:35.186 Running I/O for 1 seconds... 00:35:36.118 00:35:36.118 Latency(us) 00:35:36.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.118 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:36.118 nvme0n1 : 1.02 5011.67 19.58 0.00 0.00 25284.15 4029.25 28738.75 00:35:36.118 =================================================================================================================== 00:35:36.118 Total : 5011.67 19.58 0.00 0.00 25284.15 4029.25 28738.75 00:35:36.118 0 00:35:36.118 12:30:43 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:36.118 12:30:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:36.382 12:30:44 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:35:36.382 12:30:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.382 12:30:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.382 12:30:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.382 12:30:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.382 12:30:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.644 12:30:44 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:36.644 12:30:44 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:35:36.644 12:30:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.644 12:30:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.644 12:30:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.644 12:30:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.644 12:30:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.902 12:30:44 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:36.902 12:30:44 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.902 12:30:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:36.902 12:30:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.902 12:30:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:36.902 12:30:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:36.902 12:30:44 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:36.902 12:30:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:36.902 12:30:44 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:36.902 12:30:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:37.159 [2024-07-22 12:30:44.925139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:37.159 [2024-07-22 12:30:44.925250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235f4b0 (107): Transport endpoint is not connected 00:35:37.159 [2024-07-22 12:30:44.926238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235f4b0 (9): Bad file descriptor 00:35:37.159 [2024-07-22 12:30:44.927236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:37.159 [2024-07-22 12:30:44.927260] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:37.159 [2024-07-22 12:30:44.927276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:37.160 request: 00:35:37.160 { 00:35:37.160 "name": "nvme0", 00:35:37.160 "trtype": "tcp", 00:35:37.160 "traddr": "127.0.0.1", 00:35:37.160 "adrfam": "ipv4", 00:35:37.160 "trsvcid": "4420", 00:35:37.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:37.160 "prchk_reftag": false, 00:35:37.160 "prchk_guard": false, 00:35:37.160 "hdgst": false, 00:35:37.160 "ddgst": false, 00:35:37.160 "psk": "key1", 00:35:37.160 "method": "bdev_nvme_attach_controller", 00:35:37.160 "req_id": 1 00:35:37.160 } 00:35:37.160 Got JSON-RPC error response 00:35:37.160 response: 00:35:37.160 { 00:35:37.160 "code": -5, 00:35:37.160 "message": "Input/output error" 00:35:37.160 } 00:35:37.160 12:30:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:37.160 12:30:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:37.160 12:30:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:37.160 12:30:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:37.160 12:30:44 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:35:37.160 12:30:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:37.160 12:30:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.160 12:30:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.160 12:30:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.160 12:30:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:37.416 12:30:45 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:37.416 12:30:45 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:35:37.416 12:30:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:37.416 12:30:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.416 12:30:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.416 12:30:45 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.416 12:30:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:37.677 12:30:45 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:37.677 12:30:45 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:37.677 12:30:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:38.007 12:30:45 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:38.007 12:30:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:38.264 12:30:45 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:38.264 12:30:45 keyring_file -- keyring/file.sh@77 -- # jq length 00:35:38.264 12:30:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.522 12:30:46 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:38.522 12:30:46 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.cikAQzvX8t 00:35:38.522 12:30:46 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:38.522 12:30:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:38.522 12:30:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:38.522 12:30:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:38.522 12:30:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:38.522 12:30:46 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:38.522 12:30:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:38.522 12:30:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:38.522 12:30:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:38.522 [2024-07-22 12:30:46.443304] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cikAQzvX8t': 0100660 00:35:38.522 [2024-07-22 12:30:46.443346] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:38.522 request: 00:35:38.522 { 00:35:38.522 "name": "key0", 00:35:38.522 "path": "/tmp/tmp.cikAQzvX8t", 00:35:38.522 "method": "keyring_file_add_key", 00:35:38.522 "req_id": 1 00:35:38.522 } 00:35:38.522 Got JSON-RPC error response 00:35:38.522 response: 00:35:38.522 { 00:35:38.522 "code": -1, 00:35:38.522 "message": "Operation not permitted" 00:35:38.522 } 00:35:38.779 12:30:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:38.779 12:30:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:38.779 12:30:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:38.779 12:30:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:38.779 12:30:46 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.cikAQzvX8t 00:35:38.779 12:30:46 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:38.779 12:30:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cikAQzvX8t 00:35:38.779 12:30:46 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.cikAQzvX8t 00:35:38.779 12:30:46 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:35:39.035 12:30:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:39.035 12:30:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.035 12:30:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.035 12:30:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.035 12:30:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.329 12:30:46 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:39.329 12:30:46 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.329 12:30:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:39.329 12:30:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.329 12:30:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:39.329 12:30:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:39.329 12:30:46 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:39.329 12:30:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:39.329 12:30:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.329 12:30:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.329 [2024-07-22 12:30:47.193375] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.cikAQzvX8t': No such file or directory 00:35:39.329 [2024-07-22 12:30:47.193416] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:39.329 [2024-07-22 12:30:47.193448] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:39.329 [2024-07-22 12:30:47.193461] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:39.329 [2024-07-22 12:30:47.193473] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:39.329 request: 00:35:39.329 { 00:35:39.329 "name": "nvme0", 00:35:39.329 "trtype": "tcp", 00:35:39.329 "traddr": "127.0.0.1", 00:35:39.329 "adrfam": "ipv4", 00:35:39.329 "trsvcid": "4420", 00:35:39.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.329 "prchk_reftag": false, 00:35:39.329 
"prchk_guard": false, 00:35:39.329 "hdgst": false, 00:35:39.329 "ddgst": false, 00:35:39.329 "psk": "key0", 00:35:39.329 "method": "bdev_nvme_attach_controller", 00:35:39.329 "req_id": 1 00:35:39.329 } 00:35:39.329 Got JSON-RPC error response 00:35:39.329 response: 00:35:39.329 { 00:35:39.329 "code": -19, 00:35:39.329 "message": "No such device" 00:35:39.329 } 00:35:39.329 12:30:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:39.329 12:30:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:39.329 12:30:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:39.329 12:30:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:39.329 12:30:47 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:39.329 12:30:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:39.585 12:30:47 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:39.585 12:30:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:39.585 12:30:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:39.585 12:30:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:39.585 12:30:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:39.585 12:30:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:39.585 12:30:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jFj1ChaSiW 00:35:39.585 12:30:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:39.585 12:30:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:39.585 12:30:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:39.585 12:30:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:39.585 12:30:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:39.585 12:30:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:39.585 12:30:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:39.842 12:30:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jFj1ChaSiW 00:35:39.842 12:30:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jFj1ChaSiW 00:35:39.842 12:30:47 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.jFj1ChaSiW 00:35:39.842 12:30:47 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jFj1ChaSiW 00:35:39.842 12:30:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jFj1ChaSiW 00:35:40.099 12:30:47 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.099 12:30:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.356 nvme0n1 00:35:40.356 12:30:48 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:35:40.356 12:30:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:40.356 12:30:48 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:35:40.356 12:30:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.356 12:30:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.356 12:30:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:40.614 12:30:48 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:40.614 12:30:48 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:40.614 12:30:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:40.870 12:30:48 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:35:40.870 12:30:48 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:35:40.870 12:30:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.870 12:30:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.870 12:30:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.127 12:30:48 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:41.127 12:30:48 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:35:41.127 12:30:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:41.127 12:30:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.127 12:30:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.127 12:30:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.127 12:30:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.385 12:30:49 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:41.385 12:30:49 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:41.385 12:30:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:41.642 12:30:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:41.642 12:30:49 keyring_file -- keyring/file.sh@104 -- # jq length 00:35:41.642 12:30:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.900 12:30:49 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:41.900 12:30:49 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jFj1ChaSiW 00:35:41.900 12:30:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jFj1ChaSiW 00:35:42.157 12:30:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AEUqLyktb8 00:35:42.158 12:30:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AEUqLyktb8 00:35:42.158 12:30:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.158 12:30:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.723 nvme0n1 00:35:42.723 12:30:50 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:42.723 12:30:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:42.981 12:30:50 keyring_file -- keyring/file.sh@112 -- # config='{ 00:35:42.981 "subsystems": [ 00:35:42.981 { 00:35:42.981 "subsystem": "keyring", 00:35:42.981 "config": [ 00:35:42.981 { 00:35:42.981 "method": "keyring_file_add_key", 00:35:42.981 "params": { 00:35:42.981 "name": "key0", 00:35:42.981 "path": "/tmp/tmp.jFj1ChaSiW" 00:35:42.981 } 00:35:42.981 }, 00:35:42.981 { 00:35:42.981 "method": "keyring_file_add_key", 00:35:42.981 "params": { 00:35:42.981 "name": "key1", 00:35:42.981 "path": "/tmp/tmp.AEUqLyktb8" 00:35:42.981 } 00:35:42.982 } 00:35:42.982 ] 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "subsystem": "iobuf", 00:35:42.982 "config": [ 00:35:42.982 { 00:35:42.982 "method": "iobuf_set_options", 00:35:42.982 "params": { 00:35:42.982 "small_pool_count": 8192, 00:35:42.982 "large_pool_count": 1024, 00:35:42.982 "small_bufsize": 8192, 00:35:42.982 "large_bufsize": 135168 00:35:42.982 } 00:35:42.982 } 00:35:42.982 ] 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "subsystem": "sock", 00:35:42.982 "config": [ 00:35:42.982 { 00:35:42.982 "method": "sock_set_default_impl", 00:35:42.982 "params": { 00:35:42.982 "impl_name": "posix" 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "sock_impl_set_options", 00:35:42.982 "params": { 00:35:42.982 "impl_name": "ssl", 00:35:42.982 "recv_buf_size": 4096, 00:35:42.982 "send_buf_size": 4096, 00:35:42.982 "enable_recv_pipe": true, 00:35:42.982 "enable_quickack": false, 00:35:42.982 "enable_placement_id": 0, 00:35:42.982 "enable_zerocopy_send_server": true, 00:35:42.982 "enable_zerocopy_send_client": false, 00:35:42.982 "zerocopy_threshold": 0, 00:35:42.982 "tls_version": 0, 00:35:42.982 "enable_ktls": false 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "sock_impl_set_options", 00:35:42.982 "params": { 00:35:42.982 "impl_name": "posix", 00:35:42.982 "recv_buf_size": 2097152, 00:35:42.982 "send_buf_size": 2097152, 00:35:42.982 "enable_recv_pipe": true, 00:35:42.982 "enable_quickack": false, 00:35:42.982 "enable_placement_id": 0, 00:35:42.982 "enable_zerocopy_send_server": true, 00:35:42.982 "enable_zerocopy_send_client": false, 00:35:42.982 "zerocopy_threshold": 0, 00:35:42.982 "tls_version": 0, 00:35:42.982 "enable_ktls": false 00:35:42.982 } 00:35:42.982 } 00:35:42.982 ] 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "subsystem": "vmd", 00:35:42.982 "config": [] 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "subsystem": "accel", 00:35:42.982 "config": [ 00:35:42.982 { 00:35:42.982 "method": "accel_set_options", 00:35:42.982 "params": { 00:35:42.982 "small_cache_size": 128, 00:35:42.982 "large_cache_size": 16, 00:35:42.982 "task_count": 2048, 00:35:42.982 "sequence_count": 2048, 00:35:42.982 "buf_count": 2048 00:35:42.982 } 00:35:42.982 } 00:35:42.982 ] 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "subsystem": "bdev", 00:35:42.982 "config": [ 00:35:42.982 { 00:35:42.982 "method": "bdev_set_options", 00:35:42.982 
"params": { 00:35:42.982 "bdev_io_pool_size": 65535, 00:35:42.982 "bdev_io_cache_size": 256, 00:35:42.982 "bdev_auto_examine": true, 00:35:42.982 "iobuf_small_cache_size": 128, 00:35:42.982 "iobuf_large_cache_size": 16 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "bdev_raid_set_options", 00:35:42.982 "params": { 00:35:42.982 "process_window_size_kb": 1024, 00:35:42.982 "process_max_bandwidth_mb_sec": 0 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "bdev_iscsi_set_options", 00:35:42.982 "params": { 00:35:42.982 "timeout_sec": 30 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "bdev_nvme_set_options", 00:35:42.982 "params": { 00:35:42.982 "action_on_timeout": "none", 00:35:42.982 "timeout_us": 0, 00:35:42.982 "timeout_admin_us": 0, 00:35:42.982 "keep_alive_timeout_ms": 10000, 00:35:42.982 "arbitration_burst": 0, 00:35:42.982 "low_priority_weight": 0, 00:35:42.982 "medium_priority_weight": 0, 00:35:42.982 "high_priority_weight": 0, 00:35:42.982 "nvme_adminq_poll_period_us": 10000, 00:35:42.982 "nvme_ioq_poll_period_us": 0, 00:35:42.982 "io_queue_requests": 512, 00:35:42.982 "delay_cmd_submit": true, 00:35:42.982 "transport_retry_count": 4, 00:35:42.982 "bdev_retry_count": 3, 00:35:42.982 "transport_ack_timeout": 0, 00:35:42.982 "ctrlr_loss_timeout_sec": 0, 00:35:42.982 "reconnect_delay_sec": 0, 00:35:42.982 "fast_io_fail_timeout_sec": 0, 00:35:42.982 "disable_auto_failback": false, 00:35:42.982 "generate_uuids": false, 00:35:42.982 "transport_tos": 0, 00:35:42.982 "nvme_error_stat": false, 00:35:42.982 "rdma_srq_size": 0, 00:35:42.982 "io_path_stat": false, 00:35:42.982 "allow_accel_sequence": false, 00:35:42.982 "rdma_max_cq_size": 0, 00:35:42.982 "rdma_cm_event_timeout_ms": 0, 00:35:42.982 "dhchap_digests": [ 00:35:42.982 "sha256", 00:35:42.982 "sha384", 00:35:42.982 "sha512" 00:35:42.982 ], 00:35:42.982 "dhchap_dhgroups": [ 00:35:42.982 "null", 00:35:42.982 "ffdhe2048", 00:35:42.982 "ffdhe3072", 00:35:42.982 "ffdhe4096", 00:35:42.982 "ffdhe6144", 00:35:42.982 "ffdhe8192" 00:35:42.982 ] 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "bdev_nvme_attach_controller", 00:35:42.982 "params": { 00:35:42.982 "name": "nvme0", 00:35:42.982 "trtype": "TCP", 00:35:42.982 "adrfam": "IPv4", 00:35:42.982 "traddr": "127.0.0.1", 00:35:42.982 "trsvcid": "4420", 00:35:42.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.982 "prchk_reftag": false, 00:35:42.982 "prchk_guard": false, 00:35:42.982 "ctrlr_loss_timeout_sec": 0, 00:35:42.982 "reconnect_delay_sec": 0, 00:35:42.982 "fast_io_fail_timeout_sec": 0, 00:35:42.982 "psk": "key0", 00:35:42.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.982 "hdgst": false, 00:35:42.982 "ddgst": false 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "bdev_nvme_set_hotplug", 00:35:42.982 "params": { 00:35:42.982 "period_us": 100000, 00:35:42.982 "enable": false 00:35:42.982 } 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "method": "bdev_wait_for_examine" 00:35:42.982 } 00:35:42.982 ] 00:35:42.982 }, 00:35:42.982 { 00:35:42.982 "subsystem": "nbd", 00:35:42.982 "config": [] 00:35:42.982 } 00:35:42.982 ] 00:35:42.982 }' 00:35:42.982 12:30:50 keyring_file -- keyring/file.sh@114 -- # killprocess 1173021 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1173021 ']' 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1173021 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1173021 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1173021' 00:35:42.982 killing process with pid 1173021 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@967 -- # kill 1173021 00:35:42.982 Received shutdown signal, test time was about 1.000000 seconds 00:35:42.982 00:35:42.982 Latency(us) 00:35:42.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.982 =================================================================================================================== 00:35:42.982 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:42.982 12:30:50 keyring_file -- common/autotest_common.sh@972 -- # wait 1173021 00:35:43.241 12:30:50 keyring_file -- keyring/file.sh@117 -- # bperfpid=1174480 00:35:43.241 12:30:50 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1174480 /var/tmp/bperf.sock 00:35:43.241 12:30:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1174480 ']' 00:35:43.241 12:30:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:43.241 12:30:50 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:43.241 12:30:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:43.241 12:30:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
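The `-z -c /dev/fd/63` arguments traced at file.sh@115 above are the usual bash process-substitution pattern: the JSON captured earlier with save_config is replayed into the fresh bdevperf instance without ever being written to disk. A minimal sketch of that pattern, assuming $config holds the JSON dumped above:

# Sketch only: replay a saved SPDK config into a new bdevperf process.
# <(echo ...) is what appears as /dev/fd/63 on the traced command line.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")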
00:35:43.241 12:30:50 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:35:43.241 "subsystems": [ 00:35:43.241 { 00:35:43.241 "subsystem": "keyring", 00:35:43.241 "config": [ 00:35:43.241 { 00:35:43.241 "method": "keyring_file_add_key", 00:35:43.241 "params": { 00:35:43.241 "name": "key0", 00:35:43.241 "path": "/tmp/tmp.jFj1ChaSiW" 00:35:43.241 } 00:35:43.241 }, 00:35:43.241 { 00:35:43.241 "method": "keyring_file_add_key", 00:35:43.241 "params": { 00:35:43.241 "name": "key1", 00:35:43.241 "path": "/tmp/tmp.AEUqLyktb8" 00:35:43.241 } 00:35:43.241 } 00:35:43.241 ] 00:35:43.241 }, 00:35:43.241 { 00:35:43.241 "subsystem": "iobuf", 00:35:43.241 "config": [ 00:35:43.241 { 00:35:43.241 "method": "iobuf_set_options", 00:35:43.241 "params": { 00:35:43.241 "small_pool_count": 8192, 00:35:43.241 "large_pool_count": 1024, 00:35:43.241 "small_bufsize": 8192, 00:35:43.241 "large_bufsize": 135168 00:35:43.241 } 00:35:43.241 } 00:35:43.241 ] 00:35:43.241 }, 00:35:43.241 { 00:35:43.241 "subsystem": "sock", 00:35:43.241 "config": [ 00:35:43.241 { 00:35:43.241 "method": "sock_set_default_impl", 00:35:43.241 "params": { 00:35:43.241 "impl_name": "posix" 00:35:43.241 } 00:35:43.241 }, 00:35:43.241 { 00:35:43.241 "method": "sock_impl_set_options", 00:35:43.241 "params": { 00:35:43.241 "impl_name": "ssl", 00:35:43.241 "recv_buf_size": 4096, 00:35:43.241 "send_buf_size": 4096, 00:35:43.241 "enable_recv_pipe": true, 00:35:43.241 "enable_quickack": false, 00:35:43.241 "enable_placement_id": 0, 00:35:43.242 "enable_zerocopy_send_server": true, 00:35:43.242 "enable_zerocopy_send_client": false, 00:35:43.242 "zerocopy_threshold": 0, 00:35:43.242 "tls_version": 0, 00:35:43.242 "enable_ktls": false 00:35:43.242 } 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "method": "sock_impl_set_options", 00:35:43.242 "params": { 00:35:43.242 "impl_name": "posix", 00:35:43.242 "recv_buf_size": 2097152, 00:35:43.242 "send_buf_size": 2097152, 00:35:43.242 "enable_recv_pipe": true, 00:35:43.242 "enable_quickack": false, 00:35:43.242 "enable_placement_id": 0, 00:35:43.242 "enable_zerocopy_send_server": true, 00:35:43.242 "enable_zerocopy_send_client": false, 00:35:43.242 "zerocopy_threshold": 0, 00:35:43.242 "tls_version": 0, 00:35:43.242 "enable_ktls": false 00:35:43.242 } 00:35:43.242 } 00:35:43.242 ] 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "subsystem": "vmd", 00:35:43.242 "config": [] 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "subsystem": "accel", 00:35:43.242 "config": [ 00:35:43.242 { 00:35:43.242 "method": "accel_set_options", 00:35:43.242 "params": { 00:35:43.242 "small_cache_size": 128, 00:35:43.242 "large_cache_size": 16, 00:35:43.242 "task_count": 2048, 00:35:43.242 "sequence_count": 2048, 00:35:43.242 "buf_count": 2048 00:35:43.242 } 00:35:43.242 } 00:35:43.242 ] 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "subsystem": "bdev", 00:35:43.242 "config": [ 00:35:43.242 { 00:35:43.242 "method": "bdev_set_options", 00:35:43.242 "params": { 00:35:43.242 "bdev_io_pool_size": 65535, 00:35:43.242 "bdev_io_cache_size": 256, 00:35:43.242 "bdev_auto_examine": true, 00:35:43.242 "iobuf_small_cache_size": 128, 00:35:43.242 "iobuf_large_cache_size": 16 00:35:43.242 } 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "method": "bdev_raid_set_options", 00:35:43.242 "params": { 00:35:43.242 "process_window_size_kb": 1024, 00:35:43.242 "process_max_bandwidth_mb_sec": 0 00:35:43.242 } 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "method": "bdev_iscsi_set_options", 00:35:43.242 "params": { 00:35:43.242 "timeout_sec": 30 00:35:43.242 } 00:35:43.242 
}, 00:35:43.242 { 00:35:43.242 "method": "bdev_nvme_set_options", 00:35:43.242 "params": { 00:35:43.242 "action_on_timeout": "none", 00:35:43.242 "timeout_us": 0, 00:35:43.242 "timeout_admin_us": 0, 00:35:43.242 "keep_alive_timeout_ms": 10000, 00:35:43.242 "arbitration_burst": 0, 00:35:43.242 "low_priority_weight": 0, 00:35:43.242 "medium_priority_weight": 0, 00:35:43.242 "high_priority_weight": 0, 00:35:43.242 "nvme_adminq_poll_period_us": 10000, 00:35:43.242 "nvme_ioq_poll_period_us": 0, 00:35:43.242 "io_queue_requests": 512, 00:35:43.242 "delay_cmd_submit": true, 00:35:43.242 "transport_retry_count": 4, 00:35:43.242 "bdev_retry_count": 3, 00:35:43.242 "transport_ack_timeout": 0, 00:35:43.242 "ctrlr_loss_timeout_sec": 0, 00:35:43.242 "reconnect_delay_sec": 0, 00:35:43.242 "fast_io_fail_timeout_sec": 0, 00:35:43.242 "disable_auto_failback": false, 00:35:43.242 "generate_uuids": false, 00:35:43.242 "transport_tos": 0, 00:35:43.242 "nvme_error_stat": false, 00:35:43.242 "rdma_srq_size": 0, 00:35:43.242 "io_path_stat": false, 00:35:43.242 "allow_accel_sequence": false, 00:35:43.242 "rdma_max_cq_size": 0, 00:35:43.242 "rdma_cm_event_timeout_ms": 0, 00:35:43.242 "dhchap_digests": [ 00:35:43.242 "sha256", 00:35:43.242 "sha384", 00:35:43.242 "sha512" 00:35:43.242 ], 00:35:43.242 "dhchap_dhgroups": [ 00:35:43.242 "null", 00:35:43.242 "ffdhe2048", 00:35:43.242 "ffdhe3072", 00:35:43.242 "ffdhe4096", 00:35:43.242 "ffdhe6144", 00:35:43.242 "ffdhe8192" 00:35:43.242 ] 00:35:43.242 } 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "method": "bdev_nvme_attach_controller", 00:35:43.242 "params": { 00:35:43.242 "name": "nvme0", 00:35:43.242 "trtype": "TCP", 00:35:43.242 "adrfam": "IPv4", 00:35:43.242 "traddr": "127.0.0.1", 00:35:43.242 "trsvcid": "4420", 00:35:43.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.242 "prchk_reftag": false, 00:35:43.242 "prchk_guard": false, 00:35:43.242 "ctrlr_loss_timeout_sec": 0, 00:35:43.242 "reconnect_delay_sec": 0, 00:35:43.242 "fast_io_fail_timeout_sec": 0, 00:35:43.242 "psk": "key0", 00:35:43.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.242 "hdgst": false, 00:35:43.242 "ddgst": false 00:35:43.242 } 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "method": "bdev_nvme_set_hotplug", 00:35:43.242 "params": { 00:35:43.242 "period_us": 100000, 00:35:43.242 "enable": false 00:35:43.242 } 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "method": "bdev_wait_for_examine" 00:35:43.242 } 00:35:43.242 ] 00:35:43.242 }, 00:35:43.242 { 00:35:43.242 "subsystem": "nbd", 00:35:43.242 "config": [] 00:35:43.242 } 00:35:43.242 ] 00:35:43.242 }' 00:35:43.242 12:30:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:43.242 12:30:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.242 [2024-07-22 12:30:50.976935] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:35:43.242 [2024-07-22 12:30:50.977016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174480 ] 00:35:43.242 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.242 [2024-07-22 12:30:51.009169] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
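The refcount checks that follow (file.sh@120-122) are built from two small keyring/common.sh helpers that are traced repeatedly in this run: fetch the key list over the bperf RPC socket, filter it by name with jq, and read .refcnt. A standalone sketch of those helpers:

# Sketch of the keyring/common.sh helpers as traced above.
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { rpc keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
get_refcnt key0   # 2 while nvme0 holds key0 (keyring + controller); key1 stays at 1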
00:35:43.242 [2024-07-22 12:30:51.038768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.242 [2024-07-22 12:30:51.129245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.500 [2024-07-22 12:30:51.310923] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:44.067 12:30:51 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:44.067 12:30:51 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:44.067 12:30:51 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:44.067 12:30:51 keyring_file -- keyring/file.sh@120 -- # jq length 00:35:44.067 12:30:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.325 12:30:52 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:44.325 12:30:52 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:35:44.325 12:30:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.325 12:30:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.325 12:30:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.325 12:30:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.325 12:30:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.583 12:30:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:44.583 12:30:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:35:44.583 12:30:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.583 12:30:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.583 12:30:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.583 12:30:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.583 12:30:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.841 12:30:52 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:44.841 12:30:52 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:44.841 12:30:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:44.841 12:30:52 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:45.098 12:30:52 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:45.098 12:30:52 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:45.098 12:30:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jFj1ChaSiW /tmp/tmp.AEUqLyktb8 00:35:45.098 12:30:52 keyring_file -- keyring/file.sh@20 -- # killprocess 1174480 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1174480 ']' 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1174480 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@953 -- # uname 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1174480 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:45.098 12:30:52 
keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1174480' 00:35:45.098 killing process with pid 1174480 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@967 -- # kill 1174480 00:35:45.098 Received shutdown signal, test time was about 1.000000 seconds 00:35:45.098 00:35:45.098 Latency(us) 00:35:45.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.098 =================================================================================================================== 00:35:45.098 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:45.098 12:30:52 keyring_file -- common/autotest_common.sh@972 -- # wait 1174480 00:35:45.354 12:30:53 keyring_file -- keyring/file.sh@21 -- # killprocess 1173015 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1173015 ']' 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1173015 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@953 -- # uname 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1173015 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1173015' 00:35:45.354 killing process with pid 1173015 00:35:45.354 12:30:53 keyring_file -- common/autotest_common.sh@967 -- # kill 1173015 00:35:45.355 [2024-07-22 12:30:53.176328] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:45.355 12:30:53 keyring_file -- common/autotest_common.sh@972 -- # wait 1173015 00:35:45.919 00:35:45.919 real 0m14.091s 00:35:45.919 user 0m35.038s 00:35:45.919 sys 0m3.268s 00:35:45.919 12:30:53 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:45.919 12:30:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.919 ************************************ 00:35:45.919 END TEST keyring_file 00:35:45.919 ************************************ 00:35:45.919 12:30:53 -- common/autotest_common.sh@1142 -- # return 0 00:35:45.919 12:30:53 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:35:45.919 12:30:53 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:45.919 12:30:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:45.919 12:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:45.919 12:30:53 -- common/autotest_common.sh@10 -- # set +x 00:35:45.919 ************************************ 00:35:45.919 START TEST keyring_linux 00:35:45.919 ************************************ 00:35:45.919 12:30:53 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:45.919 * Looking for test storage... 
00:35:45.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.919 12:30:53 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.919 12:30:53 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.919 12:30:53 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.919 12:30:53 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.919 12:30:53 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.919 12:30:53 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.919 12:30:53 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:45.919 12:30:53 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:45.919 12:30:53 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:45.919 /tmp/:spdk-test:key0 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:45.919 12:30:53 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:45.919 12:30:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:45.919 /tmp/:spdk-test:key1 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1174841 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:45.919 12:30:53 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1174841 00:35:45.919 12:30:53 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1174841 ']' 00:35:45.919 12:30:53 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.919 12:30:53 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:45.919 12:30:53 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.919 12:30:53 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:45.919 12:30:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:45.919 [2024-07-22 12:30:53.812240] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:35:45.919 [2024-07-22 12:30:53.812331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174841 ] 00:35:45.919 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.920 [2024-07-22 12:30:53.843389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
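The prep_key calls above turn each raw hex string into the NVMe TLS interchange form NVMeTLSkey-1:00:<base64>: via an inline python snippet before writing it to /tmp/:spdk-test:keyN with mode 0600. A sketch of that formatting, assuming (as in SPDK's format_key helper in nvmf/common.sh) that digest 0 means the plaintext key plus a little-endian CRC32, base64-encoded together:

# Sketch of format_interchange_psk for digest 0; the CRC byte order is
# an assumption taken from SPDK's nvmf/common.sh helper.
key=00112233445566778899aabbccddeeff
python3 - <<EOF > /tmp/:spdk-test:key0
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
chmod 0600 /tmp/:spdk-test:key0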
00:35:46.177 [2024-07-22 12:30:53.869681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.177 [2024-07-22 12:30:53.957575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:35:46.435 12:30:54 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:46.435 [2024-07-22 12:30:54.223568] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.435 null0 00:35:46.435 [2024-07-22 12:30:54.255641] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:46.435 [2024-07-22 12:30:54.256177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.435 12:30:54 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:46.435 57763214 00:35:46.435 12:30:54 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:46.435 48560650 00:35:46.435 12:30:54 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1174966 00:35:46.435 12:30:54 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:46.435 12:30:54 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1174966 /var/tmp/bperf.sock 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1174966 ']' 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:46.435 12:30:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:46.435 [2024-07-22 12:30:54.321769] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 24.07.0-rc2 initialization... 00:35:46.435 [2024-07-22 12:30:54.321835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174966 ] 00:35:46.435 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.435 [2024-07-22 12:30:54.354398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
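The two keyctl add calls above load the formatted PSKs into the session keyring, which is what lets the keyring_linux module resolve the name :spdk-test:key0 at attach time; linux.sh@16/@26/@27 later search for the key by name, compare the serial (57763214 in this run; serials are per-boot kernel values), and print the payload back, and linux.sh@34 unlinks both keys during cleanup. The round-trip in isolation:

# keyctl round-trip as traced in keyring/linux.sh (sketch; the serial
# numbers are whatever the kernel assigned in this particular run).
sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to $sn
keyctl print "$sn"                      # dumps the NVMeTLSkey-1:00:...: payload
keyctl unlink "$sn"                     # cleanup, mirroring linux.sh@34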
00:35:46.693 [2024-07-22 12:30:54.384472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.693 [2024-07-22 12:30:54.475849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.693 12:30:54 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:46.693 12:30:54 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:35:46.693 12:30:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:46.693 12:30:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:46.950 12:30:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:46.950 12:30:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:47.207 12:30:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.208 12:30:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.464 [2024-07-22 12:30:55.341121] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:47.722 nvme0n1 00:35:47.722 12:30:55 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:47.722 12:30:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:47.722 12:30:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:47.722 12:30:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:47.722 12:30:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:47.722 12:30:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.979 12:30:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:47.979 12:30:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:47.979 12:30:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:47.979 12:30:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:47.979 12:30:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.979 12:30:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.979 12:30:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:48.236 12:30:55 keyring_linux -- keyring/linux.sh@25 -- # sn=57763214 00:35:48.236 12:30:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:48.236 12:30:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:48.236 12:30:55 keyring_linux -- keyring/linux.sh@26 -- # [[ 57763214 == \5\7\7\6\3\2\1\4 ]] 00:35:48.236 12:30:55 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 57763214 00:35:48.236 12:30:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:48.236 12:30:55 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.236 Running I/O for 1 seconds... 00:35:49.169 00:35:49.169 Latency(us) 00:35:49.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.169 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:49.169 nvme0n1 : 1.02 5170.87 20.20 0.00 0.00 24554.87 12718.84 37282.70 00:35:49.169 =================================================================================================================== 00:35:49.169 Total : 5170.87 20.20 0.00 0.00 24554.87 12718.84 37282.70 00:35:49.169 0 00:35:49.169 12:30:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:49.169 12:30:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:49.426 12:30:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:49.426 12:30:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:49.426 12:30:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:49.426 12:30:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:49.426 12:30:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.426 12:30:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:49.693 12:30:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:49.693 12:30:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:49.693 12:30:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:49.693 12:30:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.693 12:30:57 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:35:49.693 12:30:57 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.693 12:30:57 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:49.693 12:30:57 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.693 12:30:57 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:49.693 12:30:57 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.693 12:30:57 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.693 12:30:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.950 [2024-07-22 12:30:57.819133] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:49.950 [2024-07-22 12:30:57.820042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2441690 (107): Transport endpoint is not connected 00:35:49.950 [2024-07-22 12:30:57.821033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2441690 (9): Bad file descriptor 00:35:49.950 [2024-07-22 12:30:57.822031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:49.950 [2024-07-22 12:30:57.822053] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:49.950 [2024-07-22 12:30:57.822069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:49.950 request: 00:35:49.950 { 00:35:49.950 "name": "nvme0", 00:35:49.950 "trtype": "tcp", 00:35:49.950 "traddr": "127.0.0.1", 00:35:49.950 "adrfam": "ipv4", 00:35:49.950 "trsvcid": "4420", 00:35:49.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.950 "prchk_reftag": false, 00:35:49.950 "prchk_guard": false, 00:35:49.950 "hdgst": false, 00:35:49.950 "ddgst": false, 00:35:49.950 "psk": ":spdk-test:key1", 00:35:49.950 "method": "bdev_nvme_attach_controller", 00:35:49.950 "req_id": 1 00:35:49.950 } 00:35:49.950 Got JSON-RPC error response 00:35:49.950 response: 00:35:49.950 { 00:35:49.950 "code": -5, 00:35:49.950 "message": "Input/output error" 00:35:49.950 } 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@33 -- # sn=57763214 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 57763214 00:35:49.951 1 links removed 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@33 -- # sn=48560650 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 48560650 00:35:49.951 1 links removed 00:35:49.951 12:30:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1174966 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1174966 ']' 00:35:49.951 12:30:57 keyring_linux -- 
common/autotest_common.sh@952 -- # kill -0 1174966 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1174966 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1174966' 00:35:49.951 killing process with pid 1174966 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@967 -- # kill 1174966 00:35:49.951 Received shutdown signal, test time was about 1.000000 seconds 00:35:49.951 00:35:49.951 Latency(us) 00:35:49.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.951 =================================================================================================================== 00:35:49.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:49.951 12:30:57 keyring_linux -- common/autotest_common.sh@972 -- # wait 1174966 00:35:50.208 12:30:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1174841 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1174841 ']' 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1174841 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1174841 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1174841' 00:35:50.208 killing process with pid 1174841 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@967 -- # kill 1174841 00:35:50.208 12:30:58 keyring_linux -- common/autotest_common.sh@972 -- # wait 1174841 00:35:50.810 00:35:50.810 real 0m4.852s 00:35:50.810 user 0m9.141s 00:35:50.810 sys 0m1.554s 00:35:50.810 12:30:58 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:50.810 12:30:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:50.810 ************************************ 00:35:50.810 END TEST keyring_linux 00:35:50.810 ************************************ 00:35:50.810 12:30:58 -- common/autotest_common.sh@1142 -- # return 0 00:35:50.810 12:30:58 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:35:50.810 12:30:58 
-- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:35:50.810 12:30:58 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:35:50.810 12:30:58 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:35:50.810 12:30:58 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:35:50.810 12:30:58 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:35:50.810 12:30:58 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:35:50.810 12:30:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:50.810 12:30:58 -- common/autotest_common.sh@10 -- # set +x 00:35:50.810 12:30:58 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:35:50.810 12:30:58 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:50.810 12:30:58 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:50.810 12:30:58 -- common/autotest_common.sh@10 -- # set +x 00:35:52.708 INFO: APP EXITING 00:35:52.708 INFO: killing all VMs 00:35:52.708 INFO: killing vhost app 00:35:52.708 INFO: EXIT DONE 00:35:53.642 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:35:53.642 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:35:53.642 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:53.642 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:53.642 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:53.642 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:53.642 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:35:53.642 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:53.642 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:53.642 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:53.642 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:53.642 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:53.642 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:53.642 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:53.642 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:53.642 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:53.642 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:35:55.013 Cleaning 00:35:55.013 Removing: /var/run/dpdk/spdk0/config 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:55.013 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:55.013 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:55.013 Removing: /var/run/dpdk/spdk1/config 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:55.013 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:55.013 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:35:55.013 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:55.013 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:55.013 Removing: /var/run/dpdk/spdk2/config 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:55.013 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:55.013 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:55.013 Removing: /var/run/dpdk/spdk3/config 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:55.013 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:55.013 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:55.013 Removing: /var/run/dpdk/spdk4/config 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:55.013 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:55.013 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:55.013 Removing: /dev/shm/bdev_svc_trace.1 00:35:55.013 Removing: /dev/shm/nvmf_trace.0 00:35:55.013 Removing: /dev/shm/spdk_tgt_trace.pid854744 00:35:55.013 Removing: /var/run/dpdk/spdk0 00:35:55.013 Removing: /var/run/dpdk/spdk1 00:35:55.013 Removing: /var/run/dpdk/spdk2 00:35:55.013 Removing: /var/run/dpdk/spdk3 00:35:55.013 Removing: /var/run/dpdk/spdk4 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1022375 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1025168 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1026332 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1027645 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1027779 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1027800 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1027933 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1028362 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1029563 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1030279 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1030628 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1032288 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1032645 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1033205 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1035592 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1038855 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1042994 
00:35:55.013 Removing: /var/run/dpdk/spdk_pid1065905 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1069281 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1072911 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1073853 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1074940 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1077486 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1079714 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1083919 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1083922 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1086684 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1086820 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1086957 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1087334 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1087345 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1088424 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1089601 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1090779 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1091955 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1093141 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1094429 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1098111 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1098458 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1099953 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1101051 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1104902 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1106756 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1110153 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1113480 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1119629 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1123898 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1123906 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1136711 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1137113 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1137525 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1137969 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1138512 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1138925 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1139439 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1139843 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1142340 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1142481 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1146267 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1146440 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1148039 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1152947 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1153049 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1155847 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1157248 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1158642 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1159504 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1160904 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1161673 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1167612 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1167951 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1168344 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1169896 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1170226 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1170575 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1173015 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1173021 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1174480 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1174841 00:35:55.013 Removing: /var/run/dpdk/spdk_pid1174966 00:35:55.013 Removing: /var/run/dpdk/spdk_pid853187 00:35:55.013 Removing: /var/run/dpdk/spdk_pid853917 00:35:55.013 Removing: /var/run/dpdk/spdk_pid854744 00:35:55.013 Removing: /var/run/dpdk/spdk_pid855170 
00:35:55.013 Removing: /var/run/dpdk/spdk_pid855867
00:35:55.013 Removing: /var/run/dpdk/spdk_pid856007
00:35:55.013 Removing: /var/run/dpdk/spdk_pid856719
00:35:55.013 Removing: /var/run/dpdk/spdk_pid856731
00:35:55.013 Removing: /var/run/dpdk/spdk_pid856973
00:35:55.013 Removing: /var/run/dpdk/spdk_pid858269
00:35:55.013 Removing: /var/run/dpdk/spdk_pid859209
00:35:55.013 Removing: /var/run/dpdk/spdk_pid859441
00:35:55.013 Removing: /var/run/dpdk/spdk_pid859704
00:35:55.013 Removing: /var/run/dpdk/spdk_pid859904
00:35:55.013 Removing: /var/run/dpdk/spdk_pid860094
00:35:55.013 Removing: /var/run/dpdk/spdk_pid860307
00:35:55.013 Removing: /var/run/dpdk/spdk_pid860539
00:35:55.013 Removing: /var/run/dpdk/spdk_pid860726
00:35:55.013 Removing: /var/run/dpdk/spdk_pid861046
00:35:55.013 Removing: /var/run/dpdk/spdk_pid863904
00:35:55.013 Removing: /var/run/dpdk/spdk_pid864082
00:35:55.013 Removing: /var/run/dpdk/spdk_pid864325
00:35:55.013 Removing: /var/run/dpdk/spdk_pid864351
00:35:55.013 Removing: /var/run/dpdk/spdk_pid864664
00:35:55.013 Removing: /var/run/dpdk/spdk_pid864792
00:35:55.013 Removing: /var/run/dpdk/spdk_pid865098
00:35:55.013 Removing: /var/run/dpdk/spdk_pid865222
00:35:55.013 Removing: /var/run/dpdk/spdk_pid865395
00:35:55.013 Removing: /var/run/dpdk/spdk_pid865407
00:35:55.013 Removing: /var/run/dpdk/spdk_pid865668
00:35:55.013 Removing: /var/run/dpdk/spdk_pid865701
00:35:55.013 Removing: /var/run/dpdk/spdk_pid866068
00:35:55.013 Removing: /var/run/dpdk/spdk_pid866220
00:35:55.013 Removing: /var/run/dpdk/spdk_pid866447
00:35:55.013 Removing: /var/run/dpdk/spdk_pid866589
00:35:55.013 Removing: /var/run/dpdk/spdk_pid866725
00:35:55.013 Removing: /var/run/dpdk/spdk_pid866802
00:35:55.013 Removing: /var/run/dpdk/spdk_pid867068
00:35:55.013 Removing: /var/run/dpdk/spdk_pid867227
00:35:55.013 Removing: /var/run/dpdk/spdk_pid867388
00:35:55.013 Removing: /var/run/dpdk/spdk_pid867541
00:35:55.013 Removing: /var/run/dpdk/spdk_pid867813
00:35:55.013 Removing: /var/run/dpdk/spdk_pid867968
00:35:55.013 Removing: /var/run/dpdk/spdk_pid868133
00:35:55.013 Removing: /var/run/dpdk/spdk_pid868319
00:35:55.013 Removing: /var/run/dpdk/spdk_pid868558
00:35:55.013 Removing: /var/run/dpdk/spdk_pid868722
00:35:55.013 Removing: /var/run/dpdk/spdk_pid868877
00:35:55.013 Removing: /var/run/dpdk/spdk_pid869149
00:35:55.013 Removing: /var/run/dpdk/spdk_pid869302
00:35:55.013 Removing: /var/run/dpdk/spdk_pid869468
00:35:55.013 Removing: /var/run/dpdk/spdk_pid869622
00:35:55.013 Removing: /var/run/dpdk/spdk_pid869894
00:35:55.014 Removing: /var/run/dpdk/spdk_pid870052
00:35:55.013 Removing: /var/run/dpdk/spdk_pid870220
00:35:55.013 Removing: /var/run/dpdk/spdk_pid870488
00:35:55.013 Removing: /var/run/dpdk/spdk_pid870640
00:35:55.013 Removing: /var/run/dpdk/spdk_pid870722
00:35:55.013 Removing: /var/run/dpdk/spdk_pid870926
00:35:55.013 Removing: /var/run/dpdk/spdk_pid873097
00:35:55.013 Removing: /var/run/dpdk/spdk_pid926651
00:35:55.013 Removing: /var/run/dpdk/spdk_pid929245
00:35:55.013 Removing: /var/run/dpdk/spdk_pid936081
00:35:55.013 Removing: /var/run/dpdk/spdk_pid939365
00:35:55.270 Removing: /var/run/dpdk/spdk_pid941661
00:35:55.270 Removing: /var/run/dpdk/spdk_pid942116
00:35:55.270 Removing: /var/run/dpdk/spdk_pid945947
00:35:55.270 Removing: /var/run/dpdk/spdk_pid949787
00:35:55.270 Removing: /var/run/dpdk/spdk_pid949804
00:35:55.270 Removing: /var/run/dpdk/spdk_pid950445
00:35:55.270 Removing: /var/run/dpdk/spdk_pid951176
00:35:55.270 Removing: /var/run/dpdk/spdk_pid951750
00:35:55.270 Removing: /var/run/dpdk/spdk_pid952657
00:35:55.270 Removing: /var/run/dpdk/spdk_pid952774
00:35:55.270 Removing: /var/run/dpdk/spdk_pid952920
00:35:55.270 Removing: /var/run/dpdk/spdk_pid953047
00:35:55.270 Removing: /var/run/dpdk/spdk_pid953055
00:35:55.270 Removing: /var/run/dpdk/spdk_pid953712
00:35:55.270 Removing: /var/run/dpdk/spdk_pid954290
00:35:55.270 Removing: /var/run/dpdk/spdk_pid954904
00:35:55.270 Removing: /var/run/dpdk/spdk_pid955306
00:35:55.270 Removing: /var/run/dpdk/spdk_pid955318
00:35:55.270 Removing: /var/run/dpdk/spdk_pid955572
00:35:55.270 Removing: /var/run/dpdk/spdk_pid956451
00:35:55.270 Removing: /var/run/dpdk/spdk_pid957173
00:35:55.270 Removing: /var/run/dpdk/spdk_pid962528
00:35:55.270 Removing: /var/run/dpdk/spdk_pid962806
00:35:55.270 Removing: /var/run/dpdk/spdk_pid965303
00:35:55.270 Removing: /var/run/dpdk/spdk_pid968884
00:35:55.270 Removing: /var/run/dpdk/spdk_pid971046
00:35:55.270 Removing: /var/run/dpdk/spdk_pid977298
00:35:55.270 Removing: /var/run/dpdk/spdk_pid982478
00:35:55.270 Removing: /var/run/dpdk/spdk_pid983789
00:35:55.270 Removing: /var/run/dpdk/spdk_pid984958
00:35:55.270 Removing: /var/run/dpdk/spdk_pid995136
00:35:55.270 Removing: /var/run/dpdk/spdk_pid997222
00:35:55.270 Clean
00:35:55.270 12:31:03 -- common/autotest_common.sh@1451 -- # return 0
00:35:55.270 12:31:03 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:35:55.270 12:31:03 -- common/autotest_common.sh@728 -- # xtrace_disable
00:35:55.270 12:31:03 -- common/autotest_common.sh@10 -- # set +x
00:35:55.270 12:31:03 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:35:55.270 12:31:03 -- common/autotest_common.sh@728 -- # xtrace_disable
00:35:55.270 12:31:03 -- common/autotest_common.sh@10 -- # set +x
00:35:55.270 12:31:03 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:55.270 12:31:03 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:55.270 12:31:03 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:55.270 12:31:03 -- spdk/autotest.sh@391 -- # hash lcov
00:35:55.270 12:31:03 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:35:55.270 12:31:03 -- spdk/autotest.sh@393 -- # hostname
00:35:55.270 12:31:03 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:55.526 geninfo: WARNING: invalid characters removed from testname!
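
For readers reproducing the coverage step above outside CI: the trace guards capture behind `hash lcov` and a non-clang compiler check, then captures test coverage tagged with the node hostname. A minimal sketch, assuming $repo and $out stand in for this job's spdk checkout and output directory (variable names are mine, not from the script):

  # Sketch only: guarded lcov capture, mirroring autotest.sh@391-393 above.
  if hash lcov 2>/dev/null && [[ ${CC_TYPE:-gcc} != *clang* ]]; then
      lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
           --no-external -q -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
  fi
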
00:36:27.593 12:31:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:27.593 12:31:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:30.124 12:31:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:32.656 12:31:40 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:35.974 12:31:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:38.503 12:31:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:41.792 12:31:49 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
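
The six autotest.sh steps above (lines 394-400) merge the base and test captures, then prune vendored and system paths with repeated `lcov -r` passes before deleting the intermediates. The same sequence condensed into a loop; a sketch only, with $out standing in for the job's output directory and the pattern list copied from the log:

  # Sketch only: merge, prune, and clean up coverage tracefiles.
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"   # drop files matching $pat
  done
  rm -f "$out/cov_base.info" "$out/cov_test.info"
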
00:36:41.792 12:31:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:41.792 12:31:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:36:41.792 12:31:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:41.792 12:31:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:41.792 12:31:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:41.792 12:31:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:41.792 12:31:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:41.792 12:31:49 -- paths/export.sh@5 -- $ export PATH
00:36:41.792 12:31:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:41.792 12:31:49 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:36:41.792 12:31:49 -- common/autobuild_common.sh@447 -- $ date +%s
00:36:41.792 12:31:49 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721644309.XXXXXX
00:36:41.792 12:31:49 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721644309.oztRhK
00:36:41.792 12:31:49 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:36:41.792 12:31:49 -- common/autobuild_common.sh@453 -- $ '[' -n main ']'
00:36:41.792 12:31:49 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:36:41.792 12:31:49 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:36:41.792 12:31:49 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:41.792 12:31:49 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:41.792 12:31:49 -- common/autobuild_common.sh@463 -- $ get_config_params
00:36:41.792 12:31:49 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:36:41.792 12:31:49 -- common/autotest_common.sh@10 -- $ set +x
00:36:41.792 12:31:49 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
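
autobuild_common.sh@447 above derives a unique per-run scratch workspace from the epoch time via mktemp. A sketch of that idiom; the trap line is my addition for completeness and is not in the logged script (the trace installs `trap stop_monitor_resources EXIT` instead, as seen below):

  # Sketch only: timestamped scratch workspace, as in autobuild_common.sh@447.
  stamp=$(date +%s)                                    # e.g. 1721644309
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")  # e.g. /tmp/spdk_1721644309.oztRhK
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT                 # assumed cleanup; not part of the logged run
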
00:36:41.792 12:31:49 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:36:41.792 12:31:49 -- pm/common@17 -- $ local monitor
00:36:41.792 12:31:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:41.792 12:31:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:41.792 12:31:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:41.792 12:31:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:41.792 12:31:49 -- pm/common@21 -- $ date +%s
00:36:41.792 12:31:49 -- pm/common@25 -- $ sleep 1
00:36:41.792 12:31:49 -- pm/common@21 -- $ date +%s
00:36:41.792 12:31:49 -- pm/common@21 -- $ date +%s
00:36:41.792 12:31:49 -- pm/common@21 -- $ date +%s
00:36:41.792 12:31:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721644309
00:36:41.792 12:31:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721644309
00:36:41.792 12:31:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721644309
00:36:41.792 12:31:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721644309
00:36:41.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721644309_collect-vmstat.pm.log
00:36:41.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721644309_collect-cpu-load.pm.log
00:36:41.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721644309_collect-cpu-temp.pm.log
00:36:41.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721644309_collect-bmc-pm.bmc.pm.log
00:36:42.729 12:31:50 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:36:42.729 12:31:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:36:42.729 12:31:50 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:42.729 12:31:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:42.729 12:31:50 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:36:42.729 12:31:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:42.729 12:31:50 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:42.729 12:31:50 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:42.729 12:31:50 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:42.729 12:31:50 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:42.729 12:31:50 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:42.729 12:31:50 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:36:42.729 12:31:50 -- pm/common@29 -- $ signal_monitor_resources TERM
00:36:42.729 12:31:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:36:42.729 12:31:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:42.729 12:31:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:36:42.729 12:31:50 -- pm/common@44 -- $ pid=1186092
00:36:42.729 12:31:50 -- pm/common@50 -- $ kill -TERM 1186092
00:36:42.729 12:31:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:42.729 12:31:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:36:42.729 12:31:50 -- pm/common@44 -- $ pid=1186094
00:36:42.729 12:31:50 -- pm/common@50 -- $ kill -TERM 1186094
00:36:42.729 12:31:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:42.729 12:31:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:36:42.729 12:31:50 -- pm/common@44 -- $ pid=1186096
00:36:42.729 12:31:50 -- pm/common@50 -- $ kill -TERM 1186096
00:36:42.729 12:31:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:42.729 12:31:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:36:42.729 12:31:50 -- pm/common@44 -- $ pid=1186121
00:36:42.729 12:31:50 -- pm/common@50 -- $ sudo -E kill -TERM 1186121
00:36:42.729 + [[ -n 753556 ]]
00:36:42.729 + sudo kill 753556
00:36:42.739 [Pipeline] }
00:36:42.757 [Pipeline] // stage
00:36:42.763 [Pipeline] }
00:36:42.780 [Pipeline] // timeout
00:36:42.785 [Pipeline] }
00:36:42.800 [Pipeline] // catchError
00:36:42.805 [Pipeline] }
00:36:42.821 [Pipeline] // wrap
00:36:42.827 [Pipeline] }
00:36:42.842 [Pipeline] // catchError
00:36:42.850 [Pipeline] stage
00:36:42.853 [Pipeline] { (Epilogue)
00:36:42.868 [Pipeline] catchError
00:36:42.870 [Pipeline] {
00:36:42.884 [Pipeline] echo
00:36:42.886 Cleanup processes
00:36:42.892 [Pipeline] sh
00:36:43.178 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:43.178 1186253 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:36:43.178 1186357 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:43.193 [Pipeline] sh
00:36:43.477 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:43.477 ++ grep -v 'sudo pgrep'
00:36:43.477 ++ awk '{print $1}'
00:36:43.477 + sudo kill -9 1186253
00:36:43.490 [Pipeline] sh
00:36:43.774 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:53.761 [Pipeline] sh
00:36:54.074 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:54.074 Artifacts sizes are good
00:36:54.091 [Pipeline] archiveArtifacts
00:36:54.098 Archiving artifacts
00:36:54.330 [Pipeline] sh
00:36:54.618 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:54.633 [Pipeline] cleanWs
00:36:54.643 [WS-CLEANUP] Deleting project workspace...
00:36:54.643 [WS-CLEANUP] Deferred wipeout is used...
00:36:54.650 [WS-CLEANUP] done
00:36:54.652 [Pipeline] }
00:36:54.695 [Pipeline] // catchError
00:36:54.709 [Pipeline] sh
00:36:54.990 + logger -p user.info -t JENKINS-CI
00:36:55.000 [Pipeline] }
00:36:55.020 [Pipeline] // stage
00:36:55.028 [Pipeline] }
00:36:55.047 [Pipeline] // node
00:36:55.051 [Pipeline] End of Pipeline
00:36:55.088 Finished: SUCCESS